Set up WireGuard in a Docker container and then forward the port to it. The default container on Docker Hub is fairly straightforward, and you can always ask me for help if you need :).
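For reference, here's roughly what that looks like as a compose file, assuming the commonly used linuxserver/wireguard image (the peer names, timezone and paths are just placeholders to adapt):

```yaml
# docker-compose.yml - sketch of a WireGuard server using the linuxserver/wireguard image
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN            # required to create the wg network interface
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SERVERURL=auto       # or your public IP / dynamic DNS name
      - PEERS=phone,laptop   # a config + QR code is generated per peer
    volumes:
      - ./config:/config     # peer configs and keys end up here
    ports:
      - 51820:51820/udp      # this is the UDP port you forward on the router
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```

Bring it up with `docker compose up -d`, then forward UDP 51820 on your router to the machine running the container.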
However, if you are using IPv4, you need to make sure that you're not behind CG-NAT (if you think you might be, call your ISP and tell them you have security cameras that need to be reachable from outside, or something like that).
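A quick sanity check for CG-NAT (just my rule of thumb): compare the WAN address your router reports with your actual public IP.

```sh
# What the rest of the internet sees as your address:
curl https://ifconfig.me
# Compare this with the WAN/Internet address on your router's status page.
# If they differ, or the WAN address falls in 100.64.0.0/10 (RFC 6598),
# you're almost certainly behind CG-NAT and inbound port forwarding won't work.
```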
You could also try Tailscale, which is built on WireGuard with NAT-busting features and is a bit easier to configure (I don't personally use it, as WireGuard is sufficient for me).
After that, Caddy + dnsmasq will simply allow you to map different URLs to IP addresses:
dnsmasq:
    my_computer -> 192.168.1.64
Caddy:
    http://dokuwiki.my_computer -> http://my_computer:8080
    http://dokuwiki.192.168.1.64 -> http://192.168.1.64:8080
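Concretely, a minimal sketch of those two configs using the names from the example above (assuming dnsmasq is the DNS server your LAN clients use, and Caddy runs on 192.168.1.64):

```
# dnsmasq.conf - resolve my_computer (and its subdomains) to the box's LAN IP
address=/my_computer/192.168.1.64

# Caddyfile - Caddy on 192.168.1.64 routes by hostname to the right local port
http://dokuwiki.my_computer {
    reverse_proxy 127.0.0.1:8080
}
```

Point your devices (or your router's DHCP settings) at the dnsmasq box for DNS and the friendly names will resolve everywhere on the LAN.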
Caddy and dnsmasq are superfluous; if you've got a good memory or bookmarks, you don't really need them.
A VPN back into your home network is a lot more important. You definitely do not want to be forwarding ports to the services you're running, because if you don't know what you're doing, that could pose a network security risk.
Use the VPN as the entry point, as it’s secure. I also recommend running the VPN in a docker / podman container on an old laptop dedicated just to that, simply to keep it as isolated as you can.
Down the line you could also look into VLANs, if your router supports that.
I personally would not bother with SSL if you're just going to be providing access to trusted users who already have access to your home network.
If you are looking to host things publicly, just pay for a DigitalOcean droplet for $7 a month. It's much simpler: you still get to configure everything, but you don't expose your home network to a security risk.
I think this, combined with the solution provided in this comment, will be the most robust approach and solve all your problems.
That’s what I would do
Mobile offline sync is a lost cause. The dev environment, even on Android, is so hostile you’ll never get a good experience.
Joplin comes close, but it’s still extremely unreliable and I’ve had many dropped notes. It also takes hours to sync a large corpus.
I wrote my own web app using Axum and Flask, which is what I use now. Check out DokuWiki as well.
I don't think it would have made much of a difference, because the state-of-the-art models still aren't a database.
Maybe more recent models could store more information in a smaller number of parameters, but it’s probably going to come down to the size of the model.
The only exception there is if there is indeed some pattern in modern history that the model is able to learn, but I really doubt that.
What this article really brings to light is that people tend to use these models for things they're not good at, because the technology is being marketed as something it isn't.
I think they all would have performed significantly better with a degree of context.
Trying to use a large language model like a database is simply a misapplication of the technology.
The real question is: if you gave a human an entire library of history, would they be able to identify the relevant paragraphs based on a paragraph that only contains semantic information? The answer is probably not. This is the way we need to be using these things.
Unfortunately, companies like OpenAI really want this to be the next Google, because there's so much money to be made by selling this as a product to businesses who don't care to roll more efficient solutions.
Well, that's simply not true. The LLM is simply trained on patterns. Human history doesn't really have clear rules the way programming languages do, so the model isn't going to internalise it very well. But the English language does have patterns, so if you used a semantic or hybrid search over a corpus of content and then used an LLM to synthesise well-structured summaries and responses, it would probably be fairly usable.
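A minimal sketch of that retrieve-then-synthesise idea, assuming sentence-transformers for the embeddings; `call_llm` is a stand-in for whatever model you'd actually use, and the corpus lines are obviously placeholders:

```python
# Hedged sketch: semantic retrieval over a corpus, then LLM synthesis.
# Assumes `pip install sentence-transformers numpy`.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Paragraph one of the history corpus...",
    "Paragraph two covering a different event...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k paragraphs most semantically similar to the query."""
    q = model.encode([query], normalize_embeddings=True)
    scores = corpus_emb @ q[0]            # cosine similarity (embeddings are normalized)
    top = np.argsort(-scores)[:k]
    return [corpus[i] for i in top]

def call_llm(prompt: str) -> str:
    # Placeholder for your LLM of choice (local model, API, etc.)
    raise NotImplementedError

def answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    return call_llm(f"Using only this context:\n{context}\n\nAnswer: {question}")
```

The LLM never has to "remember" the history itself; it only has to write well about the paragraphs the search step hands it.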
The big challenge we're facing with media today is that many authors do not have any understanding of statistics, programming, or data science/ML.
An LLM is not AI; it's simply an application of an NN over a large dataset that works really well. So well, in fact, that the runtime penalty is outweighed by its utility.
I would have killed for these a decade ago, and they're an absolute game changer with a lot of potential to do a lot of good. Unfortunately, the uninitiated among us have elected to treat them like a silver bullet because they think it's the next dot-com bubble.
See also Mull; no 120 Hz though.