

4chan at least had a consistent brand as the anti-social network: full of Nazis, weirdos, pedophiles, and people who are anti-social just for the lulz. You couldn’t ruin 4chan.
Twitter’s image was being the “internet town square for serious thinkers”, with politicians, scientists, journalists and a small but healthy measure of standard shitposters. Losing that brand diminishes its value massively. Unfortunately, neither Bluesky nor Mastodon has been able to capture that clientele yet.
For a user without much technical experience, a ready-made GUI like Jan.ai is probably a good start: it downloads models automatically and can run them via the ggml library on consumer-grade hardware such as Mac M-series chips or cheap GPUs from either Nvidia or AMD.
For slightly more technically proficient users, Ollama is probably a great choice for starting to host your own OpenAI-like API for local models. I mostly run gemma2 or small Llama 3.1 models with it.
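As a rough sketch of what talking to that local API looks like, here is how you might assemble an OpenAI-style chat request for Ollama. The endpoint URL and model tag are assumptions about a default setup (Ollama normally serves an OpenAI-compatible API on port 11434, and the model must have been pulled first); the snippet only builds and prints the request body, leaving the actual HTTP call to whatever client you prefer.

```python
import json

# Assumed default endpoint of a locally running Ollama server; adjust if
# your instance listens elsewhere. The model tag "gemma2" is likewise an
# assumption and must have been fetched beforehand (e.g. `ollama pull gemma2`).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "gemma2") -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_request("Summarise the plot of Hamlet in one sentence.")
print(json.dumps(body, indent=2))
```

Because the payload follows the OpenAI chat schema, existing OpenAI client libraries can usually be pointed at the local URL without further changes.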
The market will segment away from the current technology anyway. CATL’s sodium-ion cells, with comparatively low energy density but extremely low price per kWh, will likely win the low-end market and the market for stationary storage, simply because of their much lower resource costs. The high end will be contested by things like this battery by Samsung (or other comparable pilot products). The current technology will likely end up in an awkward middle spot.
Depends on what you do with it. Synthetic data seems to be really powerful if it’s human-controlled and well built. Stuff like TinyStories (simple LLM-generated stories that use only the vocabulary of a three-year-old) can be used to make tiny language models produce sensible English output. My favourite newer example is the base data for AlphaProof: LLM-generated translations of proofs from maths papers into the proof-validation system Lean, used to teach an LLM the basic structure of mathematical proofs. Validation in Lean itself can then be used to keep only high-quality (i.e. correct) proofs. And since AlphaProof is basically a reinforcement-learning routine that uses an LLM to generate promising proof steps, shrinking the search space of possible proofs, applying it yields new correct proofs that can be fed back to further improve its internal training data.
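To give a feel for what “validated in Lean” means, here is a toy theorem of my own (an illustration, not anything from AlphaProof’s actual data). Lean’s kernel only accepts the file if the proof genuinely checks, and that mechanical pass/fail signal is exactly what lets incorrect generated proofs be filtered out of a training set automatically.

```lean
-- Toy theorem: commutativity of natural-number addition.
-- If the proof step below were wrong, Lean would reject the file,
-- and a pipeline like AlphaProof's could simply discard the sample.
theorem add_comm_toy (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

The key point is that correctness checking is fully automatic: no human needs to read the generated proof to know whether it should be kept.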
That is something some tech-savvy Lemmy users could already easily do. I repost stuff from all over the web. But any systematic preservation of good old subreddits ought to be automated.