AI AI AI AI
Yawn
Wake me up if they figure out how to make this cheap enough to put in a normal person’s server.
I’m pretty sure I speak for the majority of normal people, but we don’t have servers.
“Normal person” is a modifier of “server”. It doesn’t state any expectation that every normal person has a server; instead, it sets the expectation that they’re talking about servers owned by normal people. I have a server. I am norm… crap.
Ikr… Dude thinks we’re restaurants or something.
Yeah, when you’re a technology enthusiast, it’s easy to forget that your average user doesn’t have a home server - perhaps they just have a NAS or two.
(Kidding aside, I wish more people had NAS boxes. It’s pretty disheartening to help someone find old media and they show you a giant box of USB sticks and hard drives. On a good day. I do have a USB floppy drive and a DVD drive just in case.)
lol yeah, the lemmy userbase is NOT an accurate sample of the technical aptitude of the general population 😂
Hello fellow home labber! I have a home-built Xpenology box, a Proxmox server with a dozen VMs, a Hackintosh, and a workstation with 44 cores running Linux. Oh, and a USB floppy drive. We are out here.
I also like long walks in Oblivion.
You… you don’t? Surely there’s some mistake, have you checked down the back of your cupboard? Sometimes they fall down there. Where else do you keep your internet?
Apologies, I’m tired and that made more sense in my head.
Well obviously the internet is kept in a box, and it’s wireless. The elders of the internet let me borrow it occasionally.
Relevant XKCD
You can get a Coral TPU for 40 bucks or so (quick inference sketch at the end of this comment).
You can get an AMD APU with a NN-inference-optimized tile for under 200.
Training can be done with any relatively modern GPU, with varying efficiency and capacity depending on how much you want to spend.
What price point are you trying to hit?
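(For anyone curious what driving a Coral actually looks like, here’s a minimal sketch using the pycoral library. Untested here, and the model and image filenames are placeholders; the model has to be compiled for the Edge TPU.)

```python
# Minimal Coral Edge TPU classification sketch using pycoral.
# The filenames below are placeholders, not real assets.
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, classify
from PIL import Image

interpreter = make_interpreter("mobilenet_v2_edgetpu.tflite")  # Edge TPU-compiled model
interpreter.allocate_tensors()

# Resize the input image to whatever the model expects.
img = Image.open("cat.jpg").resize(common.input_size(interpreter))
common.set_input(interpreter, img)
interpreter.invoke()  # inference runs on the TPU

for klass in classify.get_classes(interpreter, top_k=3):
    print(klass.id, klass.score)
```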
With regards to AI? None, tbh.
With this super-fast storage I have other cool ideas, but I don’t think I can get enough bandwidth to saturate it.
TBH, that might be enough. Stuff like SDXL runs on 4 GB cards (the trick is using ComfyUI; think 5-10 s/it), and smaller LLMs reportedly do too (haven’t tried, not interested). And the reason I’m eyeing a 9070 XT isn’t AI, it’s finally upgrading my GPU - still, it would be a massive fucking boost for AI workloads.
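(To make the low-VRAM trick concrete: ComfyUI handles the offloading for you, but the same idea is visible with Hugging Face diffusers. A rough sketch, not benchmarked on a 4 GB card; expect speeds in the seconds-per-iteration range quoted above.)

```python
# Sketch: SDXL on a low-VRAM GPU via CPU offload (diffusers + accelerate).
# Weights stream between system RAM and VRAM, trading speed for memory.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
# Most aggressive offload mode: only the active submodule sits in VRAM.
pipe.enable_sequential_cpu_offload()

image = pipe("a cozy home server rack", num_inference_steps=25).images[0]
image.save("out.png")
```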
You’re willing to pay $none to have hardware ML support for local training and inference?
Well, I’ll just say that you’re gonna get what you pay for.
No, I think they’re saying they’re not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.
Precisely.
I have a hard time believing anybody wants AI. I mean, AI as it is being sold to them right now.
I mean, the image generators can be cool, and LLMs are great for bouncing ideas off of at 4 AM when everyone else is sleeping. But I can’t imagine paying for AI, I don’t want it integrated into most products, and I’m not going to put a lot of effort into hosting a low-parameter model that performs way worse than ChatGPT does without a paid plan. So you’re exactly right: it’s not being sold to me in a way that would make me want to pay for it, or invest in hardware resources to host better models.