AbstractPhil posted an update 2 days ago
The small projection-based approximator model for the geolip patchwork did not reach the accuracy level my specifications require, so I've defaulted to harvesting geometric information directly from AI models until I obtain the comparative bounds needed for a useful topology.

I must sincerely apologize for not solving this problem quickly.

This will take time. Without the approximator the process will be considerably slower, but the model I am now training will provide the approximations in a different way over time. As iterations progress, the system will conform to a huge array of geometric potentials and become capable of predicting them, but it will not be as powerful as the full patchmaker up front, and training will be slow.

If I can get my hands on a cluster of A100s or H100s for a while, I'll make a post immediately; until then I must default to the slower process.

I really banked on the smaller version working, but it simply couldn't hold a complex topological shape without the correct boundaries being learnable AND endure entropic decay at the same time. The only way to have a real shot at a full geometric shared language is to make those boundaries learnable across the full spectrum of potentials, or at least a wider one than I gave it.

I'll be refining my process further in the coming days, and I do apologize for prematurely announcing a potential that I have yet to fully explore.

There will be a fully upgraded 38-shape geolip patchwork trained ASAP to encompass the Flux 1 AE spectrum, and another trained for SD15, SDXL, and Flux 2's VAE as well. These will accommodate DIRECT complex geometric patchwork learning, though not yet at the scale promised. Autoregression is a complex mistress, as many of you know, and I will be spending a great deal of time and compute analyzing the information required to build a uniformly useful and powerful autoregression patchwork to use as an invariance for teaching.

Your work sounds really interesting, keep it up!

After a very long stretch of days, with multiple setbacks, I have found a potential direction using a type of modulation attention I haven't named yet, working in direct association with transformer structural boundaries.

This attention is essentially a form of geometric modulation, gated on differentiation. It is likely one of the building blocks for a replacement for a hard-trained set of weights, reformatted instead into one of the first legitimate safety nets built specifically for geometric attenuation.

Experiments show a multitude of potential limitations. Those limitations are destroying certain objectives and combining others into new processes, rather than letting the original design sit in concrete. In this structure, everything must conform to the math, not the math to everything else.

The entire concept here is to narrow the problem down to a regressed solution that makes the most sense relative to the smallest hardware requirement needed to achieve the goals.

https://huggingface.co/AbstractPhil/procrustes-analysis

You can find my current task-oriented experimentation stored there. As I deconstruct the models into their constituent boundaries, I accumulate a manifest of information and data. This is entirely meant to build the very geometric structural awareness that models require to be stable.
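For readers unfamiliar with the repo's namesake: classical orthogonal Procrustes analysis finds the rotation that best aligns one point set (or embedding space) to another, which is a standard building block for comparing geometry across models. This is a minimal generic sketch of that classic technique, not the code in the repo; the function name and toy data are my own.

```python
import numpy as np

def procrustes_align(A, B):
    """Solve the orthogonal Procrustes problem: find the orthogonal
    matrix R minimizing ||A @ R - B||_F, via SVD of A^T B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Toy example: B is A under a known planar rotation.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8))       # 100 points in 8 dimensions
theta = np.pi / 5
R_true = np.eye(8)
R_true[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]
B = A @ R_true

R_est = procrustes_align(A, B)
print(np.allclose(R_est, R_true, atol=1e-6))  # True: rotation recovered
```

The residual after alignment (how far `A @ R_est` still is from `B`) is one way to quantify how geometrically similar two representation spaces are.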

With the many analyses I've run, I've discovered multiple very tight bottleneck points that are uniform among models. Some likely form by the law of averages; others are mostly, but not exactly, the same across all models, so I refer to those as semi-constant. I've also found some constant spaces and some constant ranges, but I need to test more models, and larger ones.
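One simple, generic way to hunt for this kind of bottleneck (not the author's method, just a common diagnostic) is to look at the singular value spectrum of each layer's weight matrix: a layer whose spectral mass concentrates in far fewer directions than its nominal width acts as a bottleneck. The `effective_rank` helper below is a hypothetical illustration of that idea.

```python
import numpy as np

def effective_rank(W, energy=0.99):
    """Smallest number of singular directions capturing `energy`
    of the squared spectral mass -- a crude bottleneck indicator."""
    s = np.linalg.svd(W, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

rng = np.random.default_rng(1)
# A deliberately low-rank "layer": 512x512 in shape, but only
# ~16 strong directions plus a little noise.
U = rng.standard_normal((512, 16))
V = rng.standard_normal((16, 512))
W = U @ V + 0.01 * rng.standard_normal((512, 512))

print(effective_rank(W))  # far below 512, near the planted rank of 16
```

Running this over every linear layer of several checkpoints, and comparing where the effective rank dips, is one cheap way to see which narrow points recur across architectures and which are model-specific.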