BLKNSLVR 8 hours ago [-]
Old school internet. Internet done right.
Great work giis.
I haven't used it, I didn't know it existed until now, but I'm happy it exists and has been providing service to those who need it. There should be more of this.
giis 7 hours ago [-]
Thanks! Most of these came out of restrictions: we can't afford to throw money at horizontal scaling (adding more servers, load balancers, etc.), so we were kind of forced to try out new things to keep costs affordable. There are many things left out of the above doc. IIRC, we started with OpenVZ, and even today our security relies on SELinux; how we remapped user account creation with pre-existing templates for ext4 quotas; how we moved to XFS because of its flexibility; MySQL DB quotas/limits; fork bombs by college/school students taking out the Docker environment. Old school internet is the right term.
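For anyone curious what the filesystem-quota side of this looks like, a minimal sketch using XFS user quotas (the username, mount point, and limits here are made up for illustration, not Webminal's actual setup):

```shell
# Sketch: capping a user's disk usage with XFS quotas.
# Assumes /home is an XFS filesystem mounted with the usrquota option;
# the username "student42" and the limits are hypothetical.
sudo xfs_quota -x -c 'limit bsoft=200m bhard=250m student42' /home

# Report current usage and limits for all users on /home
sudo xfs_quota -x -c 'report -h' /home
```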
gk-- 4 hours ago [-]
restrictions lead to clever and optimized solutions. well done!
arjie 11 hours ago [-]
That's wonderful, and I know why it's an Indian founder: it was so hard to get a remote shell back then. Indian debit cards didn't work online reliably, and so on. So what's the hardware underneath? A cloud server, or on-prem?
These days the world is amazing. Oracle Cloud gives you a ton for free. But perhaps there's some niche where this is useful. I have to say that this shared screen comms system is outrageously crazy, hahaha.
giis 10 hours ago [-]
It began as on-prem; Freston hosted it in his house. We shared the server cost (some people called it crazy, because I sent money to someone I met on Linuxforums.org and had never seen this person, even via the internet; I trusted him because I'd known him for a few years on that forum). After 3 years or so we moved on to cloud servers, mostly switching from one infra to another whenever we got some credits :D For a couple of years we had Linode sponsoring those nodes, until its acquisition.
>shared screen comms system is outrageously crazy,
That's Freston's idea. I remember our typical chat began with something like
"Hey Laks, can you see me typing?!" ;)
xantronix 4 hours ago [-]
Constraints are beautiful.
caijia 10 hours ago [-]
UML is a smart call, and it reminds me of when I built an inventory and shift scheduling system on WordPress in 2017.
Sometimes the "wrong" / "old" tool for a job is exactly right for you if you really understand it. UML is old but fits here.
15 years is long enough to collect memories about a lot of things.
mikkupikku 8 hours ago [-]
To be fair, 8GB of RAM is huge. I don't know, maybe I'm stuck in the early '00s, but even 2GB of RAM still seems extravagant; I remember when that was an exotic amount of RAM for dedicated gamers playing extremely high-fidelity games, so for a mere web server 8GB almost seems like absurd overkill. I still feel a tinge of shame whenever I see any software of my own using more than a few hundred megabytes. What a waste.
aduty 8 hours ago [-]
I remember when 16 MB was considered a lot. Then again, I also remember when graphics acceleration was considered optional.
seethishat 7 hours ago [-]
The major difference here is that this is intended for multiple users, not one person. Imagine 5,000 users all using the device at the same time. The amount of memory, open file handles, network connections, etc. for many users at once adds up.
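For concreteness, the kind of per-user caps that keep one of many users (or a student's fork bomb, as mentioned elsewhere in the thread) from exhausting a shared box can be sketched like this; the group name and values are illustrative assumptions, not anyone's real config:

```shell
# Sketch: illustrative /etc/security/limits.conf entries for a
# shared shell server (hypothetical group and values):
#
#   @students   hard  nproc    64     # max processes per user
#   @students   hard  nofile   256    # max open files per user
#
# An equivalent soft cap can be tried in a live shell session:
ulimit -S -u 64
ulimit -S -u          # prints 64
```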
dhosek 5 hours ago [-]
The IBM mainframe that I used at UIC in the 80s had 64 MEGA bytes of RAM and about double the users.
giis 8 hours ago [-]
Until a few days ago the server was using 8GB; as a cost-cutting measure, it's been running on a 4GB server for the last week or so. :)
nkrisc 5 hours ago [-]
Depends entirely on what you're doing. 8GB of RAM is very insufficient for 3D texturing workflows, for example, where you can have many different 4k textures cached in memory. For other things, 8GB is probably a lot.
weird-eye-issue 4 hours ago [-]
Recently I had a laggy browser tab and I checked and it literally was using 7.6 GB of RAM.
andrewstuart 6 hours ago [-]
64K was huge when the Commodore 64 came along.
Twirrim 2 hours ago [-]
I barely used or remember the ZX81 my folks had, with its amazing 1KB of memory. It had a 16K expansion module you could plug into the back, which apparently made a big difference, but also didn't have the greatest connection: you could easily dislodge it by typing on the keyboard. I do remember my father coming up with various ways to try to secure it.
The ZX Spectrum that followed, with its huge 48K of RAM was night and day. The programs were so much more complicated.
Even echo on linux these days takes 38K of disk space and a baseline of 13K of memory to execute, before whatever is required to hold the message you're repeating.
Lio 2 hours ago [-]
Fixing the 16K RAM pack makes an appearance in the Micro Men film: https://youtu.be/XXBxV6-zamM?t=1694
RAM was so tight on those 8-bit machines that many games used tricks like hiding things inside the viewable area of the screen to eke out just a little bit more.
Lio 4 hours ago [-]
Not sure why the downvotes; this is true. If you only had 16, 32, or 48K, then 64K seemed like a lot.
Hell, the RAM size was so important that they named machines after it.
harias 12 hours ago [-]
It's been a while since I've used it, but Google Cloud Shell is a good free platform for learning Linux commands as well: https://shell.cloud.google.com
Well... the best days were just putting hardware in a 2U box, racking it, and paying a bit for power and networking. That was such an easier time, and a handful of Core 2 Duos were fully capable of streaming 1080p video to around a million DAUs.
Of course, there's far more money in really fancy shared hosting that wastes resources, so that's the current model. Then you market to C-level folks that "real companies" host on AWS or Azure, and that all other options are "unserious." If your opex for compute isn't a million, you're wrong.
heyethan 11 hours ago [-]
Feels like the real value here is zero setup.
Even spinning up a VM can be enough friction for beginners. A browser shell is kind of “good enough” for that.
Probably why tools like this keep sticking around. Wanna try.
It takes a lot of guts to run something like this for years on end, kudos to you for setting this up and running it for all these years. I am wondering if you'd ever come across pubnixes or tilde servers when you first started up webminal?
andai 10 hours ago [-]
This is so fascinating, I've never heard of UML!
How many users can this support simultaneously? It says 256MB RAM per user, 8GB total on server? But it's probably more than 32 simultaneous users?
giis 10 hours ago [-]
In the past I've seen around 10 processes, but I think with the current setup it could support up to around 20 UML instances. Remember, this runs on the same server where others log in and get their normal bash accounts too. So it's not a dedicated UML server.
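The capacity arithmetic behind that kind of estimate can be sketched directly. The 8GB total and 256MB per instance come from the thread; the headroom reserved for the host and the shared bash sessions is my own assumption:

```shell
# Back-of-the-envelope estimate of how many UML instances fit.
# total and per-instance figures are from the thread; the 2 GB
# headroom for the host, nginx, and shell sessions is assumed.
total_mb=$((8 * 1024))
headroom_mb=$((2 * 1024))
per_uml_mb=256
echo $(( (total_mb - headroom_mb) / per_uml_mb ))   # prints 24
```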
Well that server is worth 1M due to the 8GB RAM now!
user34283 11 hours ago [-]
I wonder how much money went into the hosting over the years.
A year ago I bought an Intel N100 mini PC with 16 GB DDR5 RAM and a 512 GB SSD for $170.
Maybe it could have hosted the site too. It's certainly a lot faster than Azure VMs with 4 "vCPUs".
PunchyHamster 6 hours ago [-]
2x the cost of power is probably a reasonable number to use.
user34283 5 hours ago [-]
With the N100, the yearly cost depending on load and electricity price is probably between $20-$60.
lvl155 6 hours ago [-]
You just might break that 10K-visit record from the Spanish tech blog in 2017.
ggandhi 5 hours ago [-]
This is a very inspiring entrepreneurial story. The story of not giving up.
actionfromafar 11 hours ago [-]
User mode linux is so cool.
giis 11 hours ago [-]
Yes, User Mode Linux is a pretty cool project. If I'm not wrong, UML is kind of a predecessor to gVisor or Firecracker, from a different era.
kevinbaiv 11 hours ago [-]
This is a good reminder that good enough + zero setup often beats more powerful solutions.
sudo_cowsay 12 hours ago [-]
I've never tried Webminal (I've only used Linode, for its simplicity). But it seems great. I'll probably try it out.
giis 12 hours ago [-]
Sure, thanks. Let me know if you have feedback.
sudo_cowsay 9 hours ago [-]
I really like the ease of use of the site. It's also very clean. However, when you go into the Linux terminal, there is a bit of latency (very noticeable). I know that it's impossible to remove the latency completely (it is what it is), but is there a way to slightly reduce it?
giis 6 hours ago [-]
There will be a little latency if you access it from a different region; the server is located in Singapore. From India, I checked right now directly via this link: https://www.webminal.org/terminal/proxy/index/ and I don't see much of an issue. I use Firefox/Chrome on Debian. Maybe try a different browser?
sudo_cowsay 9 hours ago [-]
How does it work on only 8GB of RAM if it serves 500k users (albeit not all 500k at once)?
giis 6 hours ago [-]
Only UML is the resource-consuming part, and it's kept as an option available on request. The rest is all shared: Shell In A Box, nginx, Flask. Each active user session consumes little RAM since it's a shared terminal. A simple `ls /home` shows all the other users on that server!
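As a rough illustration of that shared setup, nginx fronting a Shell In A Box daemon might look something like this fragment (inside a server block); the location path is an assumption, and the port is shellinaboxd's default, not necessarily Webminal's actual config:

```nginx
# Hypothetical reverse proxy for a shared browser terminal.
# shellinaboxd listens on 127.0.0.1:4200 by default.
location /terminal/ {
    proxy_pass http://127.0.0.1:4200/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```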
znpy 4 hours ago [-]
> User Mode Linux
Oh man, what a blast from the past. I have fond memories of learning Linux networking with Netkit (based on UML).
UML was a really really cool piece of technology.
If anybody is wondering, User Mode Linux lets you boot a Linux kernel as a normal Linux process, and then run a userspace, still inside a Linux process. This is from 2001. Super cool.
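A minimal sketch of what that looks like in practice, assuming a kernel source tree and a prebuilt root filesystem image (the image filename here is made up):

```shell
# Build the kernel as an ordinary userspace binary named ./linux
# (ARCH=um selects the User Mode Linux architecture):
make defconfig ARCH=um
make ARCH=um -j"$(nproc)"

# Boot it as a normal process: ubd0= maps a file to the guest's
# block device, mem= caps the guest's RAM.
./linux ubd0=rootfs.img mem=256M
```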
ErroneousBosh 1 hours ago [-]
I was trying to remember what this was called the other day, for some reason.
It turns out that if you run a UML kernel and point its root at the root of the disk the host Linux is running on, there's a hell of a turf war between the two and no one wins.
gchamonlive 8 hours ago [-]
Would UML be similar to Incus running unprivileged VMs?
tuananh 11 hours ago [-]
iximuiz also gives you 1 hour per day free, I think.
Very easy to use, almost instant.
ramon156 11 hours ago [-]
blegh, the content is interesting but i've grown numb towards AI speak. It's so generic that I lose interest halfway through.
andai 10 hours ago [-]
Yeah, the content itself is amazing but the AI writing detracts from that. I'd much rather read broken English than GPT output.
That being said I really enjoyed reading this, and I'm looking forward to trying it out.
riverforest 4 hours ago [-]
This is the post that should be required reading before anyone spins up a Kubernetes cluster for their side project.