08:43:29 @chocologic@madost.one

I want to buy a Taycan..

10:00:50 @chocologic@madost.one

Crushed under~ a disaster I brought on myself~

10:08:18 @chocologic@madost.one

I know it's been a while but I'm glad you came

10:14:00 @chocologic@madost.one

Between E3 and what's coming out of CES, it feels like tech trade shows as a whole are becoming kind of irrelevant..

12:00:55 @chocologic@madost.one

These days https://12ft.io is blocked almost everywhere, but https://archive.today still gets through paywalls just fine

12:01:09 @chocologic@madost.one

I'm genuinely curious how it gets through

12:22:47 @chocologic@madost.one

pretty cool, if real

12:24:12 @chocologic@madost.one

this is what i would have expected from other "ai products" tbh

12:34:55 @chocologic@madost.one

if the tech is real, i think they would have a much better time integrating that "Large Action Model" with iOS UIKit or whatever the equivalent is called on Android

i don't see any reason for it to be a separate device - in fact, where is that "action model" even running? i definitely don't want them to take my auth tokens and ship them off to some cloud for the model to execute things - that should run on-device
and if they are taking the auth tokens anyway, there's no point in using the model, since you could just integrate the APIs manually
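
A minimal sketch of what "integrate the APIs manually" means here, assuming a hypothetical service endpoint and a placeholder token (nothing below is Rabbit's or any real service's API) - with the token in hand you can just call the service directly, no model in the loop:

    import requests  # plain HTTP client

    TOKEN = "user-oauth-token"                      # placeholder, not a real credential
    API = "https://api.example-rides.com/v1/order"  # hypothetical endpoint

    # Direct API call using the auth token - no action model involved.
    resp = requests.post(
        API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"destination": "home"},
    )
    resp.raise_for_status()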

12:40:46 @chocologic@madost.one

it seems like the key here is that the AI model can understand the semantics of the UI on screen, so it can do anything a human with a computer (or phone) can do and no manual API integration is necessary - which is honestly cool (and probably is the future), but if that's the case, why do i need dedicated hardware for it instead of some windows shell plugin or something? and i really doubt a $199 device can run this model on-device :/
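
Rough sketch of the screen-semantics loop being described, assuming a hypothetical plan_action() stand-in for the actual model; pyautogui is just one off-the-shelf way to capture the screen and synthesize input:

    import pyautogui  # real library: screenshots plus synthetic mouse/keyboard input

    def plan_action(screenshot, goal):
        """Hypothetical stand-in for the "Large Action Model": given a raw
        screenshot and a goal, decide the next UI element to click.
        Returns {"x": ..., "y": ...} or None once the goal is reached."""
        return None  # the actual model is the whole unsolved part

    goal = "order a ride home"
    while True:
        shot = pyautogui.screenshot()        # see what a human would see
        action = plan_action(shot, goal)     # model reads the UI semantics
        if action is None:                   # goal reached (or model gave up)
            break
        pyautogui.click(action["x"], action["y"])  # act the way a human would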