I want to buy a Taycan..
"like.no.other"
Pseudosoftware Engineer
Rust Evangelism Strike Force
Head admin of madost.one
Feel free to follow~
FMOT: https://www.threads.net/@chocologic00
#fff1cc
Disclaimer: all opinions are my own and I do not represent my employer
Looking at E3 and now CES, it feels like tech expos as a whole are becoming kind of irrelevant..
This year's CES too — aside from the ice cream machine, everything seems unremarkable
https://www.cnet.com/home/kitchen-and-household/coldsnap-can-make-homemade-ice-cream-in-under-two-minutes-and-its-inching-closer-to-your-kitchen/
this is what i would have expected from other "ai products" tbh
if the tech is real, i think they would have a much better time integrating that "Large Action Model" with iOS UIKit or whatever the equivalent is called on Android
i don't see any reason for it to be a separate device - in fact, where is that "action model" actually running? i definitely don't want them to just take my auth tokens and send them to some cloud for the model to execute - that should run on-device
and if they are taking the auth tokens, there's no point in using the model anyways, since you could just integrate the APIs manually
it seems like the key here is that the AI model can understand the semantics of UIs on displays, so it can do anything a human with a computer (or phone) can do, and no manual interfacing with APIs is necessary - which is honestly cool (and probably the future), but if that's the case, why do i need dedicated hardware for it instead of some windows shell plugin or something? and i really doubt a $199 device can run this model on-device :/
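to sketch what i mean by "understands UI semantics" - a minimal toy agent loop, purely illustrative (the `UINode` / `pick_action` names are made up, and a real "action model" would be an ML model reading pixels or an accessibility tree, not this keyword matcher). the point is just that the agent operates on whatever the screen shows, so no per-app API integration is needed:

```python
# Hypothetical sketch of a UI-semantics agent loop. It walks an
# accessibility-tree-like structure and picks the next action toward
# a goal. pick_action() is a trivial stand-in for the actual model.
from dataclasses import dataclass, field

@dataclass
class UINode:
    role: str                       # e.g. "button", "textfield", "window"
    label: str                      # accessibility label the model can read
    children: list = field(default_factory=list)

def flatten(node):
    """Yield every node in the UI tree, depth-first."""
    yield node
    for child in node.children:
        yield from flatten(child)

def pick_action(tree, goal):
    """Stand-in for the 'action model': tap the first button whose
    label appears in the goal text; otherwise report done."""
    for node in flatten(tree):
        if node.role == "button" and node.label.lower() in goal.lower():
            return ("tap", node.label)
    return ("done", None)

# A toy screen: a music app with two buttons.
screen = UINode("window", "Music", [
    UINode("button", "Play"),
    UINode("button", "Shuffle"),
])

print(pick_action(screen, "play some music"))  # -> ('tap', 'Play')
```

this is exactly why it could live as an OS-level shell plugin or accessibility service rather than a separate gadget - the OS already exposes this tree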