An open letter to Mark Zuckerberg: Size does not matter

There are a few major requirements for the success of any digital product: easy access, a clean interface, and an easy-to-use experience.
If that were not the case, Linux-based operating systems would outperform macOS and Windows.
This is also why GPT, Claude, and Grok win over open-source LLMs like the Llama and Mistral series, even though the latter offer some amazing benefits, such as deep customization.
I think OpenAI and DeepSeek present interesting case studies here.
DeepSeek builds open-source foundation models, but if it did not also serve them directly through its web interface and its iPhone app, it would never have had the chance to disrupt American markets the way it did.
It could have remained a product nerdy devs praised as the "real deal" in their small circles, which is exactly what was happening for months before the mania around the Chinese company.
OpenAI similarly built open-source models in the shadows for about seven years, between 2015 and 2022, until ChatGPT launched. I'm sure you remember how that went.
To llama, or not to llama
One of the most prevalent barriers to adopting open-source models is technical skill, and that one is obvious.
But when it comes to LLMs, it's actually money too.
You see, running open-source models requires GPUs. Some small open-source models can run on consumer hardware, such as my M3 Max MacBook Pro with 36 GB of memory.
But others demand serious, custom hardware.
This weekend, Meta dropped the Llama 4 series, including the Maverick and Scout LLMs, and announced plans to release a giant model later.
There are no reasoning-style models in the fourth series yet, except for a Llama 4 Reasoning teaser telling us it will "come soon."
Here is what raises eyebrows most about Llama 4: Meta keeps chasing ever more parameters in its LLMs, while the cost of the hardware needed to run these models becomes ridiculous.
It is a bit early, and I haven't analyzed all the information from developers trying to run the models on different devices, but it seems the minimum requirement for running even the smaller Scout model is an NVIDIA H100 GPU, which costs about $40,000, provided you can get your hands on one.
If Sam Altman, with hundreds of billions of dollars, struggles to find GPUs, so will a cash-strapped startup founder.
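To see why the hardware bill gets ridiculous, here is a rough back-of-the-envelope sketch. The function, the 20% overhead factor, and the byte-per-parameter figures are my own illustrative assumptions, not official numbers; Scout's roughly 109 billion total parameters is Meta's reported figure.

```python
def vram_gb(params_billion: float, bytes_per_param: float = 2.0, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: parameter count times bytes per parameter,
    plus ~20% assumed overhead for activations and KV cache."""
    return params_billion * bytes_per_param * overhead

# Llama 4 Scout reportedly has ~109B total parameters (17B active per token).
# At 16-bit precision, just holding the weights dwarfs any consumer GPU:
print(round(vram_gb(109), 1))                        # hundreds of GB at fp16
# Even aggressive 4-bit quantization leaves it beyond a 36 GB laptop:
print(round(vram_gb(109, bytes_per_param=0.5), 1))
```

Because every expert's weights must sit in memory even though only a fraction are active per token, quantization alone doesn't bring this down to consumer-hardware territory.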
A mixture of experts
That said, there is something interesting that makes it possible to run the Llama 4 line on Apple hardware: a Mac Studio with 128 GB of memory or more.
That something is the mixture-of-experts architecture.
Earlier LLMs, such as GPT-3 or the original Llama, were actually single models trained on data from every field. But companies have quickly shifted to the mixture-of-experts concept.
This means that although we see Llama 4 Scout as one model we talk to, it actually routes each query among 16 separately trained experts, deciding which should respond based on whether we asked a math question or requested something creative.
This differs from traditional dense models, which run as one homogeneous network where all of the LLM's parameters are activated for every query. So even if I ask one, "What is 2+2?", it activates all of its knowledge of the philosophies of Socrates and Plato.
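The routing idea above can be sketched in a few lines. This is a toy illustration with made-up dimensions and random weights, not Llama's actual router; it only shows the mechanic of scoring experts and touching just the chosen one's weights per token.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 8, 16, 1   # toy sizes; Scout routes among 16 experts

# Toy parameters: a router matrix plus one small linear "expert" each
router = rng.normal(size=(D, N_EXPERTS))
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]

def moe_forward(x: np.ndarray):
    """Score all experts for this token, then run only the top-k of them."""
    scores = x @ router                              # one score per expert
    top = np.argsort(scores)[-TOP_K:]                # indices of best experts
    exp = np.exp(scores[top] - scores[top].max())    # softmax over chosen ones
    weights = exp / exp.sum()
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top

token = rng.normal(size=D)
out, chosen = moe_forward(token)
print(chosen)  # only these experts' weights were used for this token
```

The catch for hardware, as the example hints, is that all 16 experts' weights must be resident in memory even though only one runs per token, which is why a high-memory Mac Studio can work while a single consumer GPU cannot.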
Dear Zuck, size does not matter
Setting aside the difficulties of running the Llama 4 series, even those who have tried it (mostly through Groq or OpenRouter) are less than impressed.
The Llama 4 series isn't great at coding or deep questions, but it does seem to love emojis (and so do I).
So there you have it: even as companies keep piling parameters into foundation LLM training, that does not seem to improve things.
In fact, it may have reopened a major business opportunity we thought was closed by now: training smaller models specialized for a particular field.
As AI researcher Andriy Burkov has also noted, if your business idea is not in math, coding, or answering factual questions, there is a great opportunity to build your own dataset and train a specialized model.
The likely improvement in general-purpose models' skills will not be a threat to it.
So, is it time for us to build our own LLM at Dzambhala Finance? Perhaps, but first we need enough revenue to maintain a larger dataset.
This post is republished from the author's AI newsletter, which comes out every week.