A time-space breakthrough, ancient memory systems, and a hyper-stack inspired by the I Ching
Memory is the real bottleneck in artificial intelligence and computing. Whether you are training large language models, managing complex simulations, or orchestrating microservices, it is often RAM, not the CPU, that dictates the true scale of what you can achieve.
So, what if you could shrink your active memory footprint dramatically? What if you could "remember far less" even as your datasets keep growing? Ryan Williams of MIT has proven that this is possible, resolving a roughly 50-year-old puzzle in computer science by showing that any computation that runs in time t can be simulated using only about √t space.
But here is what really fascinated me: many Indigenous cultures have been doing something remarkably similar for thousands of years, compressing vast systems of knowledge into human memory using song, dance, and story. I touched on this briefly in my other article on cultural intelligence.
So this made me wonder: can we combine the mathematical elegance of Williams's breakthrough with the embodied wisdom of these ancient practices? Could the I Ching's 64 hexagrams serve as the basis for an entirely new mental model of memory? I ended up with an early prototype I call the Hyper-Stack, a table of 64 hexagram cells.
Williams's result, expressed more formally, is as follows:
For every function t(n) ≥ n: TIME[t(n)] ⊆ SPACE[O(√(t(n) · log t(n)))]
… is a big deal. Most previous time-space simulations were either weak or purely theoretical, but this one is both strong and constructive. Essentially, any decision problem that can be solved by a multitape Turing machine in time t(n) can also be solved by a (possibly different) multitape Turing machine using only O(√(t(n) · log t(n))) space.
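To get a feel for the scale, here is a quick back-of-the-envelope calculation, sketched in Python. It ignores constant factors and machine-model details, and the workload size is made up:

```python
import math

t = 10**9  # a hypothetical workload of one billion time steps

# Naive view: up to ~t memory cells to track the computation's history.
naive_space = t

# Williams-style bound: O(sqrt(t * log t)) cells, ignoring constant factors.
williams_space = math.sqrt(t * math.log2(t))

print(f"naive:           ~{naive_space:,.0f} cells")
print(f"sqrt(t * log t): ~{williams_space:,.0f} cells")  # roughly 173,000 cells
```

A billion-step computation drops from a billion cells of state to a few hundred thousand, at least in the idealized model.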
Practically, as I understand it, this means you can shrink active memory from roughly t down to roughly √t (plus some logarithmic factors). That can be the difference between fitting your models on self-hosted GPUs and needing to rent $20,000 servers. For engineers and developers, this translates directly into:
- Cost savings: less RAM means fewer expensive machines.
- Scalability: fit bigger problems onto existing hardware.
- Efficiency: execution may even get faster thanks to better cache utilization.
Beyond the raw numbers, I had an intuition that hexagonal layouts might reveal something more. They could help enforce locality, predict access patterns, and even help you debug complex workloads faster by giving you a visual, spatial map of your memory. After all, we already have evidence of hexagonal structures shaping technology, from cell towers to pathfinding algorithms to geospatial analysis systems.
The breakthrough in plain English (and code)
The old assumption in computer science was that if an algorithm takes t time steps, you might need about t space to track its state. Williams's proof breaks this by showing that you can recursively divide the computation into smaller pieces (roughly, by finding balanced cuts in the graph of the computation), allowing the same memory cells to be reused again and again. Think of it this way: instead of needing a unique notebook page for every single step, you can build a system that efficiently reuses a smaller set of pages by strategically pausing, summarizing, and storing checkpoints.
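To make the "reuse your notebook pages" idea concrete, here is a minimal sketch of the classic checkpoint-and-recompute trade-off. It is not Williams's actual construction (which is far more subtle), just the simplest illustration of trading extra time for less space: keep only about √T snapshots and replay forward from the nearest one whenever an intermediate state is needed. The step function and the workload are entirely hypothetical.

```python
import math

def step(state: int) -> int:
    """One hypothetical, deterministic unit of work (a toy stand-in)."""
    return (state * 6364136223846793005 + 1442695040888963407) % (2**64)

def run_with_checkpoints(initial_state: int, total_steps: int):
    """Run T steps but keep only ~sqrt(T) checkpoints instead of all T states."""
    interval = max(1, math.isqrt(total_steps))
    checkpoints = {0: initial_state}
    state = initial_state
    for i in range(1, total_steps + 1):
        state = step(state)
        if i % interval == 0:
            checkpoints[i] = state
    return state, checkpoints

def state_at(step_index: int, checkpoints: dict) -> int:
    """Recover any intermediate state by replaying from the nearest checkpoint."""
    base = max(i for i in checkpoints if i <= step_index)
    state = checkpoints[base]
    for _ in range(step_index - base):  # at most ~sqrt(T) recomputed steps
        state = step(state)
    return state

final_state, cps = run_with_checkpoints(42, 100_000)
print(len(cps), "checkpoints kept instead of 100,000 stored states")
print(state_at(12_345, cps))  # replays at most ~316 steps from a checkpoint
```

The cost is recomputation: recovering a state takes up to √T extra steps, which is the same flavor of time-for-space trade that the theorem formalizes far more aggressively.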
What ancient memory systems teach us about data compression
Long before disk drives and DRAM, Indigenous cultures mastered data compression through "old school" techniques:
- Songlines (Australia): vast networks of stories tied to the landscape. The song literally helps you remember the route, compressing complex navigation data into a memorable pattern. More recently, we have seen the power of song in bringing back moments of memory for patients with dementia.
- Dance as a database: some cultures encode taxonomies, rituals, and history in choreographed dance sequences, storing the information in muscle memory and shared physical patterns. See the research here.
- Story stacking: layered narratives let people recall vast genealogies, legal precedents, and procedures without written records, building complex information structures through mnemonic devices. We know Indigenous cultures thrived on oral storytelling traditions; a modern version of this may well be meme culture.
What struck me is that these methods also implicitly reuse limited working memory, folding complexity into recursive, embodied patterns. From what I can see, this parallels the memory reuse at the heart of Williams's approach.
Why hexagrams and the I Ching?
Choosing the I Ching was not arbitrary. It was driven by three basic reasons:
- Optimal packing: hexagons tile the plane with minimal boundary per unit of area. Think of honeycomb cells or the structure of graphene. That geometric efficiency can translate into better memory organization.
- Binary elegance: every I Ching hexagram is a perfect 6-bit binary code (yin = 0, yang = 1). Leibniz himself recognized this correspondence long before digital computers existed. That makes the hexagrams inherently computable.
- Gray code: moving between neighboring hexagrams flips only a single bit. This "Gray code" property (a way of counting in binary where each step changes exactly one bit, which makes it useful for reducing errors in digital systems) is crucial for low-overhead transitions between memory states, reducing the cost of context switching; a minimal sketch follows this list.
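Here is the binary view in a few lines of Python: each of the 64 cells is a 6-bit code, and walking them in binary-reflected Gray-code order changes exactly one line per move. The cell indexing is my own illustration, not the prototype's internal data model.

```python
def hexagram_bits(n: int) -> str:
    """Render a cell index 0..63 as a 6-bit hexagram (yin = 0, yang = 1)."""
    return format(n, "06b")

def to_gray(n: int) -> int:
    """Binary-reflected Gray code: consecutive indices differ in exactly one bit."""
    return n ^ (n >> 1)

def hamming(a: int, b: int) -> int:
    """Number of bits (hexagram lines) that differ between two codes."""
    return bin(a ^ b).count("1")

# Walking all 64 cells in Gray-code order flips exactly one line per move.
order = [to_gray(i) for i in range(64)]
assert all(hamming(a, b) == 1 for a, b in zip(order, order[1:]))

print(" -> ".join(hexagram_bits(c) for c in order[:4]))
# 000000 -> 000001 -> 000011 -> 000010
```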
This combination felt like a natural bridge between Williams's recursion and a more approachable, visual memory model.
Building the Hyper-Stack: 64 hexagram cells
I wanted something tangible that you could see and use, so I built a small prototype.
Here is what it does:
- Maps workloads onto a 6-D hypercube (64 vertices = 64 hexagrams).
- Uses Gray-code paths to keep transitions efficient and reduce the "cost" of moving between memory states.
- Visualizes active cells, the chunked operations assigned to them, and how they get reused.
- Lets you simulate workloads in real time so you can watch the memory behavior.
For example, on a workload of 100,000 steps, the prototype showed active memory use of roughly 0.6·√t, with a slowdown factor of about 4x compared to a linear-memory approach. Even in this early form, it is surprisingly intuitive: you can visually track where your memory is allocated and reused.
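The prototype's internals are not reproduced here, but a rough, hypothetical sketch of the kind of mapping it visualizes might look like the following: chunk a T-step workload into ~√T-step blocks and assign the blocks to the 64 cells in Gray-code order, recycling cells as you go. All names and numbers are illustrative.

```python
import math

def to_gray(n: int) -> int:
    """Binary-reflected Gray code: consecutive indices differ in one bit."""
    return n ^ (n >> 1)

def simulate(total_steps: int, cells: int = 64):
    """Chunk a T-step workload into ~sqrt(T)-step blocks and assign the blocks
    to `cells` hexagram cells, walking the cells in Gray-code order."""
    block_size = max(1, math.isqrt(total_steps))
    n_blocks = math.ceil(total_steps / block_size)
    occupant = {}   # cell index -> block currently stored in that cell
    reuses = 0
    for block in range(n_blocks):
        cell = to_gray(block % cells)  # each hop changes a single hexagram line
        if cell in occupant:
            reuses += 1                # the cell's memory gets recycled
        occupant[cell] = block
    return n_blocks, len(occupant), reuses

blocks, live_cells, reuses = simulate(100_000)
print(f"{blocks} blocks of ~sqrt(T) steps, {live_cells} cells ever live, {reuses} reuses")
```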
How you can use it in your projects
The concepts behind the Hyper-Stack can be adapted to various real-world scenarios:
- Model checkpointing: map transformer layers, RNN states, or other deep learning components onto hexagonal cells for more efficient checkpointing and fine-tuning.
- Embedded systems: fit complex state machines and algorithms into tiny RAM footprints, which is crucial for IoT and other resource-constrained devices.
- Debugging complex errors: use the visual adjacency of hexagrams to track state transitions and spot anomalous states or bottlenecks in your memory patterns more quickly; a toy example follows this list.
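As a toy example of the debugging idea (my own hypothetical sketch, not a shipped feature of the prototype): if you encode each observed state as a 6-bit hexagram, any transition that flips more than one bit breaks Gray-code adjacency and can be flagged for closer inspection.

```python
def hamming(a: int, b: int) -> int:
    """Number of bits (hexagram lines) that differ between two state codes."""
    return bin(a ^ b).count("1")

def flag_non_adjacent(trace):
    """Flag transitions that flip more than one line, i.e. jumps between
    non-neighboring hexagrams, as candidates for closer inspection."""
    return [
        (i, format(a, "06b"), format(b, "06b"))
        for i, (a, b) in enumerate(zip(trace, trace[1:]))
        if hamming(a, b) > 1
    ]

# A hypothetical trace of 6-bit state codes observed from a workload.
trace = [0b000000, 0b000001, 0b000011, 0b101011, 0b101010]
print(flag_non_adjacent(trace))  # [(2, '000011', '101011')]
```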
If you are eager to experiment, the prototype code is available (ping me!) and can be adapted to your specific workloads.
Open questions and next steps
While promising, there are always caveats:
- The √t result is an asymptotic bound. Real-world gains will depend heavily on the specific structure of the computation graph and the characteristics of the workload.
- Constant factors in the recursive simulation still add overhead that needs to be tamed for practical applications.
- Any benefit of the hexagram visualization for debugging is, so far, anecdotal, but it warrants further exploration.
Final thoughts: ancient wisdom, modern code
This project started with a simple question: "Can ancient systems inspire better computing?" After combining Williams's theory with the early Hyper-Stack prototypes, I am convinced the answer is "yes". If you are interested in exploring the limits of algorithmic memory compression with me, or you just want to see how a 3,000-year-old code can help you debug your code in 2025, check out the prototype and share your feedback.
Try the prototype here: Hyper64i.vercel.app
Ping me if you want to collaborate or fork the project.