Readit News
ironborn123 commented on Tesla Optimus Folds a Shirt [video]   twitter.com/elonmusk/stat... · Posted by u/modeless
modeless · 2 years ago
I am guessing that this is teleoperated, not autonomous. If it can do this autonomously that would be extremely impressive. Of course, even without autonomy it's still impressive as a demo of hardware capabilities and of the fidelity of their teleoperation system. And with data like this fed into modern ML it's only a matter of time before they can train a system to do this.

Edit: teleoperation confirmed https://x.com/elonmusk/status/1746970616060580326

ironborn123 · 2 years ago
Even a teleoperated version can command a huge market. Think millions of robo-butlers operated by gig workers from low-income countries.
ironborn123 commented on Generative AI could make search harder to trust   wired.com/story/fast-forw... · Posted by u/jedwhite
ironborn123 · 2 years ago
Wasn't there a paper a few months back, "Textbooks Are All You Need"? Yes, found it: https://arxiv.org/abs/2306.11644

So search engines in their traditional sense will be obsolete anyway.

1) GPT-4 and other such LLMs will generate textbooks and manuals for every conceivable topic.

2) These textbooks will be 'dehallucinated' and curated by known experts on particular topics, who have reputations to maintain. The experts' names will be advertised by the LLM provider.

3) People will search by chatting with the LLMs, which will in turn cite the curated textbooks in their output.
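
A minimal sketch of what step 3 could look like, assuming a hypothetical retriever over the curated-textbook index and a generic chat-completion call (search_textbooks and chat are illustrative stubs, not any real product's API):

    # Illustrative sketch: answer a query from curated, expert-reviewed
    # textbooks and ask an LLM to cite only those passages.

    def search_textbooks(query: str, k: int = 3) -> list[dict]:
        """Hypothetical retriever over the curated-textbook index (stubbed)."""
        return [{"textbook": "Algorithms, 2nd ed.", "chapter": 4,
                 "text": "Binary search halves the search interval each step."}]

    def chat(system: str, user: str) -> str:
        """Hypothetical chat-completion call (stubbed; swap in a real client)."""
        return f"(LLM answer grounded in:\n{user})"

    def answer_with_citations(query: str) -> str:
        passages = search_textbooks(query)
        # Number the passages so the model can cite them as [1], [2], ...
        context = "\n".join(
            f"[{i}] {p['textbook']}, ch. {p['chapter']}: {p['text']}"
            for i, p in enumerate(passages, 1)
        )
        return chat(
            system="Answer using ONLY the numbered passages; cite them like [1].",
            user=f"{context}\n\nQuestion: {query}",
        )

    print(answer_with_citations("How does binary search work?"))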

ironborn123 commented on The molecule DIM reduces biofilms causing dental plaque: study   scitechdaily.com/90-reduc... · Posted by u/hanniabu
ironborn123 · 2 years ago
So any salad that contains chopped cabbage or broccoli, eaten daily, should do the trick?
ironborn123 commented on Mathematical proof is a social compact   quantamagazine.org/why-ma... · Posted by u/digital55
ironborn123 · 2 years ago
There are weaker formal systems like Presburger arithmetic (Peano without multiplication) and Skolem arithmetic (Peano without addition) that have been proven to be complete and consistent. Tarski also showed that there are formal systems for the real numbers (hence also for geometry) with the same properties. (Although the reals include the integers, the integers alone carry a lot more definable structure, so Tarski's result does not imply the same for Peano.)

There are also extensions of these (e.g. Presburger arithmetic extended with multiplication by constants) that are known to be complete and consistent.

These systems do not require any social compact. Any theorems proven in them are absolute truth, although the range of statements these systems can express is limited.

One may require a social compact for Peano, ZFC, and other such powerful formal systems.

Trusting that software implementations like Coq and Lean are bug-free may also require a social compact, if being bug-free cannot itself be formally proved, although determining this seems like it should be an easier problem.
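
To make the first point concrete: Lean 4 ships a decision procedure (the omega tactic) for linear integer arithmetic, essentially the Presburger fragment, so statements in that fragment are settled mechanically rather than by social agreement. A minimal sketch, assuming a recent Lean 4 toolchain:

    -- Linear (Presburger-style) facts over Nat, discharged automatically.
    -- omega decides linear arithmetic over Int/Nat; no hand-written proof needed.
    example (x y : Nat) : x + y = y + x := by omega

    -- Multiplication by a constant literal stays inside the decidable fragment.
    example (n : Nat) : 2 * n + 1 ≠ 2 * (n + 3) := by omega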

ironborn123 commented on Lie still in bed   ognjen.io/lie-still-in-be... · Posted by u/rognjen
ironborn123 · 2 years ago
While willpower may work for some people, what actually works for me (and, I believe, for the majority of people) is self-deception or distraction.

To sleep, use white noise/rhythmic music/soothing voice

To climb a mountain, tell yourself that your next goal is just to reach that particular rock about 100 metres higher

In the gym, make a friend and chat and joke with them while doing your exercises

While sprinting, divide 32 by 13 to many decimal places, as Joey from Friends once suggested.
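
For the curious, Joey's distraction is just long division carried out digit by digit; a throwaway sketch in Python:

    def decimal_digits(numerator: int, denominator: int, places: int) -> str:
        """Long division: the integer part, then 'places' decimal digits."""
        digits = [str(numerator // denominator), "."]
        remainder = numerator % denominator
        for _ in range(places):
            remainder *= 10
            digits.append(str(remainder // denominator))
            remainder %= denominator
        return "".join(digits)

    print(decimal_digits(32, 13, 24))  # 2.461538461538461538461538 ("461538" repeats)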

ironborn123 commented on AI isn’t good enough   skventures.substack.com/p... · Posted by u/MaysonL
ironborn123 · 2 years ago
Rather than asserting that current LLMs are at their tail end, or that AI isn't good enough, it is much more instructive to ask what the bottlenecks or constraints to further progress are, and what would help remove them.

They can largely be divided into three buckets:

1) Compute constraint - Currently, large companies using expensive Nvidia chips do most of the heavy lifting of training good models. Chips will improve over time, and competition from Intel/AMD will bring down prices, but this is a slow process. A faster breakthrough could be training via distributed computing over millions of consumer GPUs. There are already efforts in that direction (e.g. Petals and SWARM parallelism for finetuning/full training, though the Eastern European/Russian teams developing them don't seem to have enough resources); see the sketch after this list.

2) Data constraint - If you rely only on human-generated text data, you will soon exhaust this resource (maybe GPT-4 already has). But the TinyStories dataset generated from GPT-4 shows that if we can have SOTA models generate more data (especially on niche topics that appear less frequently in human-generated data), and have deterministic/AI filters to separate the good from the bad data thus generated, data quantity would no longer be an issue. Also, multimodal data is expected (with the right model architectures) to be more efficient than single-modal data at training SOTA models that grok the world, and here we have massive amounts of online video data to tap into.

3) Architectural knowledge constraint - This may be the most difficult of all: figuring out the next big scalable architecture after Transformers. Either we keep trying newer ideas (like the Stanford Hazy Research group does) and hope something sticks, or we get SOTA models a few years down the line to do this ideation for us.
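
On point 1, a minimal sketch of what pooling consumer GPUs already looks like with Petals (the model name and generation settings here are assumptions based on its public examples, not a vetted recipe):

    # Illustrative only: joins a public Petals swarm, where each peer serves a
    # few transformer blocks and inference hops across volunteers' GPUs.
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM  # pip install petals

    model_name = "bigscience/bloom-7b1-petals"  # assumed swarm-hosted checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("Distributed training over consumer GPUs", return_tensors="pt")
    outputs = model.generate(inputs["input_ids"], max_new_tokens=20)
    print(tokenizer.decode(outputs[0]))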

ironborn123 commented on Stablevideo: Text-driven consistency-aware diffusion video editing   rese1f.github.io/StableVi... · Posted by u/satvikpendem
runeks · 2 years ago
That's what everyone seems to suppose. I don't see why it's warranted, though.
ironborn123 · 2 years ago
One (quite convincing) theory is that anything that can be achieved by a carbon-based neural network (e.g. the human brain) can also be achieved by a silicon-based neural network. The hardware may change, but the expressiveness of the software it can run shouldn't be affected, unless there is a fundamental chemistry constraint.

Since human brains during dreams (lucid or otherwise) can generate coherent scenes and transform individual elements within a scene, diffusion-based models running on CPUs/GPUs should eventually be able to do the same.

ironborn123 commented on China’s property giant Evergrande files for bankruptcy protection in Manhattan   cnbc.com/2023/08/18/china... · Posted by u/donsupreme
ironborn123 · 2 years ago
How can a declining population selling mostly commoditized goods support ever-increasing property prices, especially when property is already overleveraged?

This crash was always on the cards. Just a matter of when, and the when may have finally arrived.

ironborn123 commented on Fusion Foolery   dothemath.ucsd.edu/2023/0... · Posted by u/rohansingh
ironborn123 · 2 years ago
I get the feeling the article preaches to the choir.

The serious sources have always portrayed NIF's work as a technical achievement. But they are read mostly by scientist and engineer types.

Mass media, which hypes things, is read, well, by the masses, who don't have the patience or inclination to delve into technical details.

This dichotomy will always exist. I remember once reading a Chekhov story in which two intellectuals discuss how the townspeople are more interested in silly affairs and scandals than in recognizing intellectual achievements.
