Did I just create the world's smallest AI server?
I successfully installed Ollama in Termux on my degoogled Unihertz Jelly Star, reputed to be the world's smallest smartphone with its 3-inch screen. The Jelly Star packs 8 GB of RAM plus 7 GB of virtual RAM. I downloaded the distilled DeepSeek-R1 7B model (deepseek-r1:7b) and ran it locally on the device. Is it slow? Yes. But it steadily outputs text word by word, doesn't crash, and takes no longer than a couple of minutes to respond in full. Anyone have other examples of micro AI workstations?
>Unihertz
Man, I had the Titan Pocket. I got it and loved the physical keyboard. What a disappointment, though: the Wi-Fi would just cut out, and there was no fix to be found.
I'm running Ollama + Smollm on this thing: https://pine64.org/devices/quartz64_model_b/
Very cool to learn about that device. I want to see how small I can go. The main surprise for me was being able to run the 7B-parameter version at all.
Nvidia's Orin Nano Super is pretty recent and a very neat little piece of kit for the price: https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...
Basically a 25-watt Raspberry Pi with an integrated CUDA-capable GPU. I haven't got one myself, but it probably represents the greatest size/performance inflection point for consumers right now.