An interesting tweet by Andrej Karpathy on LLMs and code generation


https://x.com/karpathy/status/1930305209747812559




You could see creation as having two modes. Borrowing GAN terminology:
1) generation and
2) discrimination.
E.g. painting: you make a brush stroke (1) and then look for a while to see whether you improved the painting (2). These two stages are interspersed in pretty much all creative work.

Second point. Discrimination can be computationally very hard.
- Images are by far the easiest. E.g. image-generator teams can create giant grids of results to decide if one image is better than another, thanks to the giant GPU in your brain built for processing images very fast.
- Text is much harder. It is skimmable, but you still have to read it; it is semantic, discrete, and precise, so you also have to reason (especially in, e.g., code).
- Audio is maybe even harder still, imo, because it forces a time axis, so it's not even skimmable. You're forced to spend serial compute and can't parallelize it at all.

You could say that in coding, LLMs have collapsed (1) to ~instant, but have done very little to address (2). A person still has to stare at the results and discriminate whether they are good. This is my major criticism of LLM coding: they casually spit out *way* too much code per query, at arbitrary complexity, pretending there is no stage 2. Getting that much code is bad and scary. Instead, the LLM has to actively work with you to break problems down into little incremental steps, each more easily verifiable. It has to anticipate the computational work of (2) and reduce it as much as possible. It has to really care.
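To make that incremental workflow concrete, here is a minimal, hypothetical sketch of a generate-then-verify loop (not from the tweet; every function and lambda below is an illustrative stand-in, with real LLM calls and real tests to be swapped in):

```python
# Toy sketch of the incremental loop: stage (1) proposes one small
# change at a time; stage (2) verifies each change before moving on.
from typing import Callable, List

def incremental_session(
    propose: Callable[[str], str],   # stage (1): e.g. an LLM producing a small patch
    verify: Callable[[str], bool],   # stage (2): tests, type checks, human review
    tasks: List[str],
) -> List[str]:
    accepted = []
    for task in tasks:
        patch = propose(task)        # small, single-purpose change
        if verify(patch):            # cheap to check *because* it is small
            accepted.append(patch)
        # a rejected patch is cheap to discard; nothing else depends on it yet
    return accepted

# Stand-in usage: pretend verification rejects anything containing "bug".
patches = incremental_session(
    propose=lambda t: f"patch for {t}",
    verify=lambda p: "bug" not in p,
    tasks=["add input validation", "fix bug in parser"],
)
print(patches)  # only the patch that passes stage (2) survives
```

The point of this structure is that verification cost stays roughly constant per step, instead of growing with one huge dumped-out blob of code.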

This leads me to probably the biggest misunderstanding non-coders have about coding. They think that coding is about writing the code (1). It's not. It's about staring at the code (2): loading it all into your working memory, pacing back and forth, thinking through all the edge cases. If you catch me at a random point while I'm "programming", I'm probably just staring at the screen and, if interrupted, really mad, because it is so computationally strenuous. If we only make (1) much faster but don't also reduce (2) (which is most of the time!), then clearly the overall speed of coding won't improve (see Amdahl's law).
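The Amdahl's-law point is easy to quantify. A minimal sketch, assuming (as an illustration, not a figure from the tweet) that generation is 20% of coding time and review/discrimination is the other 80%:

```python
# Amdahl's law: overall speedup when only a fraction p of total time
# (here: code generation, stage 1) is accelerated by factor s.
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

p_generation = 0.2  # assumed share of time spent on stage (1)

for s in (2.0, 10.0, float("inf")):
    print(f"generation {s:g}x faster -> overall {amdahl_speedup(p_generation, s):.2f}x")
# generation 2x faster   -> overall 1.11x
# generation 10x faster  -> overall 1.22x
# generation infx faster -> overall 1.25x
```

Even infinitely fast generation caps the overall speedup at 1/(1 - 0.2) = 1.25x, which is why reducing (2) matters so much more than accelerating (1) further.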

@css_nlp

