Readit News
cgearhart · 3 years ago
From my perspective it’s just _confusing_ to work in AI right now. We have some massive models that are doing some really neat stuff, and apparently hundreds of millions of people are using them—but I keep wondering: to do _what_, exactly? I’m not asking what the models can do, I’m asking what people want the models to do every day, all the time.

I’ve been shown some neat pictures people made that they thought were cool. I don’t know that I need this every day.

I’ve seen examples of “write an email to my boss”. It would take me longer to explain to ChatGPT what I want than to write these myself.

I’ve seen “write a snippet of code” demos. But I hardly care about this compared to designing a good API; or designing software that is testable, extensible, maintainable, and follows reasonable design principles.

In fact, no one in my extended sphere of friends and family has asked me anything about chatGPT, midjourney, or any of these other models. The only people I hear about these models from are other tech people.

I can see that these models are significantly better than anything before, but I can’t yet see the “killer app”. (For comparison, I don’t remember anyone in my orbit predicting search or social networking being killer apps for the internet—but we all expected things like TV and retail sales to move online.)

What am I missing?

brookst · 3 years ago
You’re asking “what’s so big about GUIs? Literally nobody has asked to move a pointer around a screen”.

It’s the use cases these things enable that are important.

Today, I wrote a draft product announcement. Only after I was done did I realize I had written it in a really impersonal third person (“users will be able to”). No big deal, but maybe 10-20 minutes of work to make it energetic and second person (“now you can…”).

30 seconds with ChatGPT: “Rewrite with more energy, in the second person, using best practices for announcements.”

Six months ago I would never have asked for that. Today it was glorious and let me move on to focus on more important things.

nwienert · 3 years ago
I mean Google revolutionized search. Apple revolutionized personal computing.

OpenAI revolutionized… rewriting things with slightly different wording?

I’ve seen so many breathless people posting “this would have taken me so long to search” and then I type 3 keywords from their massive prompt they crafted and find it instantly on Google. We’re talking 1000x or more faster. I feel like the same is happening in your comment. How often have I thought “damn I wish I wrote this blog post ever so slightly differently” in my life? Maybe a handful of times? And yes I’m including all generalizations of that question.

But certainly fake girlfriends and summarization will be mid size fields. Image generation has some mid size potential. But these will be spread between many companies.

I really think it has uses, no doubt, but is it a revolution? Where? It’s not creative in the valuable sense - media, art, fashion, etc. will all adopt it marginally, but ultimately it will only serve to sharpen the desire for genuine human experience and for the cohesive creativity it really falls flat at. It saves some marginal time, perhaps, if you’re OK with sounding like a robot.

Taking into account the downsides it looks like a hype bubble right now to me, and a draw in the long run. There’s just a whole lot of tech people trying to cash in on the hype.

onion2k · 3 years ago
That's very cool, and right now it's a good idea, but I strongly suspect GPT only looks clever and does a good job in isolation. If everyone starts using it product announcements will start looking very similar, and they'll lose a lot of their impact.

This is definitely the case with cover letters for job applications. The ones written by GPT are pretty obvious - my guesses could be wrong, but after years of most applications not having a cover letter, most applications over the past few months have one, and there's a distinct 'style' that seems to be showing up. I suspect GPT is involved.

Using GPT could be the 'bootstrap ui' of product announcements. It looks great on its own, but put it next to a bunch of other companies and they all fail to stand out.

cgearhart · 3 years ago
Hmm…no, I’m not doubting the value of the interface. I’m asking what’s the “killer app”.

Maybe I should just ask ChatGPT to explain it to me…

tuatoru · 3 years ago
Most of the economy involves moving around physical things. Construction, transport, nursing and related health occupations (physiotherapy, home aid, etc.), retail and wholesale logistics. Manufacturing employs another chunk. Services for agriculture, fishing, mining also employ more people than are directly employed in those industries and are mostly to do with machines and equipment. Utilities.

Most of the rest is high-touch. People want interactions with humans for important stuff, not with AI. Remote teaching was an unmitigated disaster for most school students: how will AI teachers do, do you think? Attempts at robot police and security guards haven't gone down very well to date. It'll be a while before there are AI EMTs and firemen.

So there are grounds for skepticism.

avidphantasm · 3 years ago
I guess if you don’t like writing, this is good for you. However, I like writing. Moreover, many people are lousy at it (not that I couldn’t improve). I’m not sure I want something trained on lots of mediocre writing doing editing for me.

donkeyd · 3 years ago
> What am I missing?

I'm going to guess (assume) you probably haven't worked in a 'real' business. A place where elbow grease still does the majority of the work and where Windows 8 was only just phased out.

The killer app (to me) in case of GPT is GPT itself, not ChatGPT. ChatGPT just allows me to easily test use cases for GPT. There are many interesting use cases for those elbow grease businesses for GPT. For example:

Data entry. There's still a lot of data entry being done from unstructured text. Where specifics like names and addresses need to be extracted from letters and emails and contracts and other stuff. I've worked on these challenges before using different strategies and GPT blows my mind with what it can do just by asking to grab this data and format it as JSON. Is it 100% correct? Nope. You still need people to review the data (depending on the use case), but that already saves tons of work.

Categorization. Some companies still get tons of emails that need to be forwarded to specific departments. This is another thing that GPT does surprisingly well out of the box.
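The routing case is even smaller. In this sketch the department names are invented and the model call itself is omitted; the point is to constrain the model to a closed label set and keep a human fallback for anything else.

```python
DEPARTMENTS = {"billing", "support", "sales"}  # invented labels
FALLBACK = "triage"  # human review queue

def routing_prompt(email_body: str) -> str:
    return (
        f"Classify this email as exactly one of {sorted(DEPARTMENTS)}. "
        f"Reply with the label only.\n\n{email_body}"
    )

def route(model_reply: str) -> str:
    # Normalize the reply; anything outside the label set goes to a human.
    label = model_reply.strip().lower()
    return label if label in DEPARTMENTS else FALLBACK

print(route("  Billing\n"))        # prints "billing"
print(route("As an AI model..."))  # prints "triage"
```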

And that's just GPT. There are many other legacy business processes that can be automated (partially) by other models that are coming out right now. Even just 'segment anything' that Meta just released is incredibly useful for many use cases I've seen in my daily work.

Killer apps are always a combination of a technology and a real-world problem it solves. If you don't venture into the real world and only stay part of the tech world, seeing the killer app is very difficult and ends up leading to Juicero-like products.

version_five · 3 years ago
Right, questioning the value of a chatbot is a good indication someone hasn't had a real job. Totally legit characterization that doesn't make the whole thing sound like a confidence game.

stepbeek · 3 years ago
Data entry is a really interesting one to me. We’ve been replacing a legacy (read: >1 MLoC with no tests) system on and off for a few years. The original system had a ton of double entry or manual data entry, and the human error rate is noticeable. If GPT could have automated this with a similar or reduced error rate then we would have considered it a win.

The real win long term remains killing off manual entry any time it’s possible, but GPT offers a nice patch.

naavis · 3 years ago
> There's still a lot of data entry being done from unstructured text. Where specifics like names and addresses need to be extracted from letters and emails and contracts and other stuff. I've worked on these challenges before using different strategies and GPT blows my mind with what it can do just by asking to grab this data and format it as JSON.

Is this data confidential or something you are willing to send to anyone? If the former, you probably shouldn't be sending it to an AI company that retains the data for its own purposes.

throwaway2037 · 3 years ago
Nice call about "Categorization". I would put a lot of "Data Entry" in the same category. Excellent ideas!

zirgs · 3 years ago
Latvian is a language that's spoken by fewer than 2 million people. It's irrelevant outside of Latvia.

All of our government funded researchers who worked on natural language processing can now throw their work in the trash and resign. ChatGPT is leagues better than anything that they've done. And OpenAI weren't even trying.

Windows wasn't even localised in Latvian properly until very recently. Google Translate still spits out ridiculous translations (though it's better than before). Most software isn't even available in Latvian. Almost no video games are in Latvian. Only the most popular books and movies are being translated. Interested in something less popular? You'd better learn other languages.

And now ChatGPT comes out and I can ask it to write C++ functions in Latvian. I don't need to learn English to be a programmer any more. Nothing like that has been done before. It translates stuff way better than Google. And it will only get better.

Imagine that there's a book that only 50 people from Latvia are interested in. Human translators aren't going to bother. But ChatGPT can do it easily.

This is a very big deal.

jimsimmons · 3 years ago
Why is Latvian NLP research in the trash?

Real researchers build on advances; they don't quit because of them. I'm sure GPT is not as optimised as it could be for processing non-English text. There's clearly a lot of work to do. At least this is true for South Asian languages, and I'm sure it's true even for popular Western languages like French or German.

int_19h · 3 years ago
This also works in reverse, BTW - think of all the obscure books, movies etc that were never translated because it wasn't worth the effort.

wrp · 3 years ago
Quality machine translation for less common languages would be a service worth paying for. Any links to examples and evaluation of ChatGPT capabilities?
version_five · 3 years ago
It's a hype bubble. There's a small group of die-hards in the tech-adjacent VC community that talk it up to no end and attack anyone who disagrees (I just saw a guy ask you if you were "virtue signaling" with your post - lol). And there's lots of people exploring how it could potentially be commercially useful - just like with blockchain. As everyone points out, it's got more going for it than blockchain did because it does something more tangible.

But the jury's still out on the long-run commercial potential of pretty good autocomplete or chat-as-search. It's probably more than zero (like blockchain), but it won't "change everything".

I'd guess that 90% of people either agree with your observation or don't notice / care. It's just that these things bring out the vocal defenders - usually a sign people know deep down it's a bit of BS and post out of insecurity.

bavila · 3 years ago
I've got access to GPT-4 at work. There have been many times that I'll encounter a bug while programming, I'll paste my code to GPT, tell it what my code is currently doing, and tell it what I actually want the code to do. It then explains what I'm doing wrong and what I can do to fix it. I've had success with this over 90% of the time. This saves me a significant amount of time that would have otherwise been spent hunting down solutions on Google, Stack Overflow, GitHub issues, etc.

I don't know what else to say other than that I would not willingly go back to life without GPT. The value speaks for itself to me.

petesergeant · 3 years ago
I think you're wrong. I never bought into the blockchain hype, and blockchain added literally nothing to my life other than some speculation right at the beginning. I would pay $200 a month for ChatGPT, today, without thinking twice about it. That alone makes me think it's a much, much bigger deal than you think it is.

epigramx · 3 years ago
> it does something more tangible

It does something more superficially apparent to naive people. Decentralization is far more important; it will be one of the main ingredients of direct democracy.

jack_pp · 3 years ago
I thought about learning Go and Fiber to build the backend of a side project of mine and I did but as it goes for any new language / stack I wasn't feeling confident in it. Then ChatGPT came out and I thought what the hell, let's see what all the fuss is about.

So I asked it to write me a struct for a table with the "id, name, longitude, latitude, news" columns. That worked well, I was surprised it automatically inferred the data types for said columns.

Then I asked it to write endpoints for retrieving a record from that table and it did so perfectly which again I was surprised by. Asked it to add endpoints for adding records and retrieving all records. Again, no bugs, perfect code. At the end I asked it to create a python script to test the API and it did so flawlessly.

Next day I created a Docker env with Postgres and went to test the code, but it didn't work - turns out it had written it with MySQL in mind. So I went back and told it to rewrite the entire code with Postgres in mind, and again it did so flawlessly. Overall, writing this small API endpoint took maybe 30-60 minutes.

Considering I was a total newbie at Go, this probably would have taken me several hours to complete successfully, and this code is basically just boilerplate that I don't care to learn by heart so I can be more productive in the future. Now that I have ChatGPT I basically don't have to. I don't have to write Python to speed up my dev time; I can just have ChatGPT write the basic stuff in a highly performant language. It removed the only drawback, which was more boilerplate.

riffraff · 3 years ago
> Again, no bugs, perfect code

How do you know? Either you know enough Go to be able to tell, and thus you'd be able to write this yourself, or you don't and thus you can't really judge.

I mean, it probably is right, but this "I don't know something, but I trust what the chatbot told me" is what worries me about the rise of the LLMs.

xpe · 3 years ago
Stop looking for the “killer app”; that metaphor is not useful here. Language models don’t need to have one, two, or a hundred “killer apps”. They are likely going to be highly distributed. They can revolutionize every textual interaction point with any person, group, or organization. For worse and better.

Instead, pay attention to how industries and people redefine themselves. There are going to be winners and losers.

nazgul17 · 3 years ago
I found ChatGPT helpful in a bunch of diverse situations. I'll go into detail below, but overall I think the most valuable thing it can do is take questions that require both knowledge (that I don't have and want to acquire) and understanding (that prior AI technologies did not have) and provide meaningful answers. I suppose I find it very useful for my own education - with the understanding that I should use 2 or 3 grains of salt when reading its answers.

For example, I am studying Japanese and often encounter expressions that seem to mean the same thing; I would ask my teacher, but her time is limited. I can instead ask ChatGPT and only bring to my teacher the questions whose ChatGPT answers did not convince me.

Another example: I like understanding why there are certain steps in a recipe. It would be hard to find someone with the knowledge and time to answer the question, never mind I should pay for their time. ChatGPT can explain what I want to know at the level of detail that I want.

I was also able to get a decent understanding of a mathematical question I had no business understanding by recursively asking questions until I was able to link its answers to my own knowledge.

It was also able to answer questions about the Spring framework that I had while reading the documentation itself. In that context, going down rabbit holes severely slows down learning and has the potential to just get me lost.

glandium · 3 years ago
Be wary with ChatGPT and Japanese. As someone fluent enough and using ChatGPT for inspiration in writing, as well as to find words that either I have "on the tip of my tongue" but fail to remember, or find idiomatic expressions that I may or may not have heard of, ChatGPT can come up with weird hallucinations (Edit: to the point of making up words, I guess mainly because its tokens are at best at the character level; knowing that, it's amazing it performs as well as it does, honestly).

I always double check with Google searches, but at least ChatGPT gets me somewhere where I can actually search for something useful.

Relatedly, sometimes, even with an initial prompt I've used in the past to make it do what I need to a text in Japanese, and despite everything I type being in Japanese, sometimes it decides to respond entirely in English.

It can also be really bad with context. Because in Japanese, the subject is often omitted because it's known from context, ChatGPT often mixes things up when rewording or summarizing.

worrycue · 3 years ago
A lot of your examples you can just google search for the answer - and at least google won’t hallucinate; worst case the websites are wrong but GPT’s training data can be wrong too.
csomar · 3 years ago
You are not missing anything. It's a bubble. This is tech 101 and all over again. You gotta hype some new thing whether it's Web 2.0, Crypto or AI.

LLMs have proven to be great as a gimmick or for making rather okay localized approximations. You can create a good-enough image without any design skills, or have auto-completion on steroids. However, there is no proof that this same tech can extrapolate to the next level.

Most startups are putting their eggs in this single basket. I have the feeling that this will be what triggers another AI winter, and some VCs will be left holding the bags... but why do I care?

spaceman_2020 · 3 years ago
You couldn’t be further from the truth.

This is the single greatest leap in productivity we’ve had in the last 100 years.

Last week, I used GPT-4 to write my code. Later, I used it to analyze 100+ websites and come up with a personalized pitch for a relevant plugin/product idea - something that would have taken me 100+ hours.

zarzavat · 3 years ago
I believe that you are half right. AI is going to radically change the world, but not now, maybe in 10 years. The pace of research is slower than the expectations of investors. This will lead to the perception of a boom and bust and AI winter, but in reality there will be slow steady improvements. It’s just been commercialized too soon.
greeny7373 · 3 years ago
All current indications say otherwise.

Web3 and crypto didn't disrupt markets.

ChatGPT - or let's say ML - already did, and there's no writing on the wall that it will stop. On the contrary, it shows how much potential there is and how excited a lot of people are.

We already have ML in Office, in Bing, in Google Workspace. There's Midjourney, SD, etc.

You can find ML-generated porn pictures.

We constantly get news about advances in multiple ML fields.

This is so different to crypto and stuff.

unit_circle · 3 years ago
- Amazing translations (this alone is a game changer)

- Language learning... The ability to improvise realistic conversations is huge. I can ask to talk about cooking a specific dish or a sport!

- As others have noted, refining documents similar to grammarly

- Looking for a product with extremely specific features (though it isn't very good at comparing yet)

- Searches that are too vague and complicated to articulate to a search engine or use exact matching ...

spaceman_2020 · 3 years ago
I don’t think people dismissing AI have access to GPT-4 yet. GPT-3.5 did feel like a gimmick.

GPT-4 feels like a team of really capable human interns.

The potential really goes wild once you connect it to the internet and use stuff like AutoGPT.

riffraff · 3 years ago
Didn't we have amazing translations for a long time?

I do not notice a significant improvement between new models and last year's deepl/Google translate.

skybrian · 3 years ago
For translations, I use Google translate. Should I switch?

Fr0styMatt88 · 3 years ago
I’ll give you a concrete example from a few days ago in my job.

I needed a quick utility window in the Unity editor to see what animations could fire animation events and what those were.

I’m somewhat familiar with the editor API, enough to know what to Google and roughly where to go in the docs. I don’t do it enough though to really learn it beyond that point. So I’d estimate I could spend maybe one and a half hours, counting research, coding something, testing it and then context-switching back to what I was working on.

On a whim I asked ChatGPT (GPT-4) if it could do it for me. Formulating the prompt took a few minutes. I included a short bullet list of what I wanted and told it what Unity version I was on.

In almost an instant, it did it. I copied the code into a new file, added it to my project and it worked.

Time from idea to the first working version was around 10 minutes.

I asked for some minor refinement and then asked how I could extend it. It gave me starting points and taught me something new about Unity. All that slow doc searching, Google searching and forum-trawling was gone.

It’s like having my own personal dev assistant.

girvo · 3 years ago
One worry I have for the long term is: how will we learn or adopt new programming languages in the future, when the LLM du jour knows nothing about Language X? Will we be stuck with what we’ve got because LLMs make us too productive in them?

CGamesPlay · 3 years ago
> I’ve seen examples of “write an email to my boss”. It would take me longer to explain to ChatGPT what I want than to write these myself.

You can integrate ChatGPT here to help with the proofreading and editing. If you have a list of points, you can have ChatGPT write an email, then integrate its changes. This is useful especially if English is not your first language. (Edit:) Here's a quick example. These emails aren't great, but might be better than what I can come up with myself in 2 minutes. https://pastebin.com/dD22gR4y

> I’ve seen “write a snippet of code” demos. But I hardly care about this compared to designing a good API; or designing software that is testable, extensible, maintainable, and follows reasonable design principles.

It's super helpful when starting in a new space. I needed to write a python data munging script the other day. Using a few ChatGPT queries I found dependencies and understood the basics of using them. I still had to check the docs, but I jumped past the "tutorial" and straight to "API reference".

> In fact, no one in my extended sphere of friends and family has asked me anything about chatGPT, midjourney, or any of these other models. The only people I hear about these models from are other tech people.

Counter-anecdote, I met two copywriters working for media publications (non-tech) who both have had GPT-based services integrated into their workflows by the company management.

AussieWog93 · 3 years ago
>I’ve seen “write a snippet of code” demos.

I've used this part of ChatGPT before. Incredibly useful for getting the syntax of some library that you're going to use once in your life, then never again.

Had a sysadmin mate do something similar to generate a simple Chrome plugin for internal use at his work.

That alone justifies the price for ChatGPT Plus, IMO.

danenania · 3 years ago
Agreed. It’s also great for things like “zip up all the files in this folder, upload the zip file to s3, and output a progress bar while uploading”. It’s not difficult in an architectural or algorithmic sense, but it takes time to refresh on the APIs, find a progress bar library, and how do presigned s3 posts work again? And you can make little mistakes, forget error handling, etc. You could easily spend an hour or more on something like this, especially if you haven’t done it before in x language.

With ChatGPT it takes a few minutes. It can add up quite dramatically when you have a bunch of these kinds of tasks on the todo list. It does feel revolutionary to me as a productivity enhancer.

flir · 3 years ago
I used it to knock up a browser extension tonight. My knowledge of JS is basically non-existent. It definitely paid for itself this month.

achenatx · 3 years ago
It's good for so many things:

Need to write a speech as a best man

Write a eulogy for a deceased family member

Recently used it to speed up fixing an OAuth problem in code I didn't write, knowing nothing about OAuth. Replaces Stack Exchange with much more tailored answers.

can take hundreds of pages of text and distill it down to an executive summary of any length

Daughter uses it to explain how to solve algebra problems, not just give an answer. It will completely change education.

Marketing is using it for all content. Devs hate writing blog posts; it will take some code as an input and write a blog post about what it does. Can use it to come up with questions to seed a podcast.

Paralegals are virtually redundant - an AI with all legal caselaw makes research much easier.

HR using it to write customized offer letters, review resumes, etc.

It isn't just about asking a question and getting an answer. You can keep adding context and the answers keep getting better.

otabdeveloper4 · 3 years ago
> Need to write a speech as a best man

> Write a eulogy for a deceased family member

Yeah, if you're okay with sounding like marketing copy written by a robot.

Remember last month when Stable Diffusion was supposed to put Disney out of business? Not gonna happen.

atq2119 · 3 years ago
> can take hundreds of pages of text and distill it down to an executive summary of any length

This is the kind of breathless claim that no doubt fuels the skeptics.

None of the context windows are large enough for "hundreds of pages" nor "executive summaries of any length".

I do believe that it's possible to make LLMs do that kind of task, with significant engineering effort to make them do summaries iteratively somehow, essentially "compressing" parts of the document recursively. But it's not something that you can just give to ChatGPT and have it work.
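That iterative idea is mostly control flow. In the sketch below, `summarize` is a stand-in for a context-window-limited model call (faked here by keeping a truncated first sentence); the recursive reduction is the part being illustrated.

```python
CHUNK = 2000  # pretend context budget, in characters

def summarize(text: str) -> str:
    # Stand-in for the LLM call; guaranteed to shrink its input.
    return text.split(". ")[0][:200].rstrip(".") + "."

def summarize_long(text: str) -> str:
    """Recursively 'compress' text until a single summarize() call fits."""
    if len(text) <= CHUNK:
        return summarize(text)
    chunks = [text[i:i + CHUNK] for i in range(0, len(text), CHUNK)]
    partials = " ".join(summarize(c) for c in chunks)
    return summarize_long(partials)

print(summarize_long("The quarterly numbers improved. " * 600))
```

A real version would swap `summarize` for a model call and set `CHUNK` from the token window; the control flow stays the same.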

So yes, the hype is real, in both ways: there's lots of potential to explore over the next years, but also a lot of the claims you read today are not sufficiently hedged, which makes them look outlandish.

mayagerard0 · 3 years ago
Unrelated to the general question of usefulness, but I'd be careful using it as a mathematics tutor. Maybe the algebra in this case is sufficiently basic as to be trivial for ChatGPT, but I've found that quite often it gets confused by relatively simple mathematics in very baffling ways (one particular example I found amusing is it trying to use the pigeonhole principle with exactly N pigeons and N pigeonholes). There's also the problem that, being a complete pushover, it makes it hard (especially for a learner) to pinpoint whether the error is in the source or in the understanding of the reader. A teacher may, upon the request of a student to clarify what seems like a mistake, say "No, this may seem wrong but it's correct for so-and-so reasons", but ChatGPT usually goes "You're right, that was a mistake" and gets itself into messier and messier reasoning.

burlesona · 3 years ago
I don’t think you’re missing anything. I like the “calculator for words” analogy that was posted on HN a few days ago. It doesn’t seem like a revolutionary product, but it does seem like a fundamental innovation which will then unlock many more complex things in subtle ways. Calculators were arguably the predecessor to computers, which are kind of a big deal :)

JBiserkov · 3 years ago
> Calculators were arguably the predecessor to computers, which are kind of a big deal :)

"A computer is a machine that moves data around and only occasionally performs computation on it." ;-)

Computers aren't great because they can compute numbers faster; they're great because we've managed to encode text, audio, video, geospatial, etc. information as numbers, which allowed us to perform complex text, audio, video, geospatial, etc. operations.

PeterStuer · 3 years ago
Most of the people asking the question "to do _what_, exactly" seem to have not much of a "knowledge worker" experience. Every non-tech person I showed GPT4 to was after a few tests immediately using it to speed up parts of their daily work.

Most impactful is how it destroys the blank page/getting started barrier, second how easy it is to substantially change/adapt/refocus produced work.

It is like having an incredibly efficient, patient and encyclopedic junior collaborator 24/7 at your disposal. It can't be trusted to fully automate without knowledgeable supervision, but it saves a boatload of time and effort.

notahacker · 3 years ago
> Most of the people asking the question "to do _what_, exactly" seem to have not much of a "knowledge worker" experience.

From personal experience I'd say the opposite: GPT lacks the specialist knowledge to produce useful writing or yield accurate answers in any of the markets I've worked in (I'll grant that less niche markets exist, and that GPT is pretty good at fixing the writing of people who lack English writing skill). The people hyping GPT as replacing most of those roles all seem to showcase hypothetical "generate a website for an imaginary product with minimal brief" situations, which GPT excels at because no real-world knowledge-worker constraints are imposed on the solution. That's definitely not to say it has no use, but lots of less technologically impressive accomplishments, like data entry wizards and templates, are also useful without being considered transformative.

fader · 3 years ago
> Most impactful is how it destroys the blank page/getting started barrier, second how easy it is to substantially change/adapt/refocus produced work.

This is the killer feature of GPT for me. I'm very, very good at optimizing and solving problems within specific domains, but terrible at picking a direction with no boundaries. (Pick a theme for a costume party and I'll have the most interesting costume. Throw a Halloween party and I'll show up in jeans and a t-shirt.)

I recently wanted to submit a conference talk, but wasn't sure where to start. I gave ChatGPT a list of possible topics and some general guidance about what I thought was interesting and asked it to suggest topics. I picked one from its list and asked for tweaks, then "discussed" with ChatGPT for a few more rounds until I had a very clear idea of what the talk would be.

I don't feel like that's cheating. I'm still going to create and give the talk myself. But if I had to come up with the topic and abstract on my own with only a blank sheet of paper to start with, I'd never have submitted it.

anotheryou · 3 years ago
It's just good enough and approachable. But I agree, for what I do professionally it just doesn't even come close and I'm already quite efficient.

From me and around me:

- marketing asked me to show them the ropes of Midjourney yesterday; the boss said "this will be the face of our new product" about one of my hasty "creations".

- mom is learning English with ChatGPT because she finished Duolingo

- I wrote a PoC demo for a prompt-engineering tool and a GPT-4 demo chat in a day. That would have taken me days without GPT-4. (Material design, storing in local storage, gobbling up the data specification, saving everything on change rather than with buttons.)

- First draft for some diagrams in mermaid js worked well too, or converting from flow diagram to a swimlane diagram.

- All kinds of personal data cleaning: dirty list of emails > ready to paste into a mail client; wall of text with broken newline characters > sub-headlines and paragraphs

- virtual assistants (e.g. company chat-/voicebots), needs a bit of tooling but gpt4 is totally ready for it (apart from latency and price)

- 30 minutes to a browser add-on that marks tweets as "seen" so I can skip them if I scrolled past them before (a userscript, to be precise)

- understanding tax regulations %)
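The "seen tweets" add-on above boils down to a small piece of state tracking. Here's a minimal sketch of that core, assuming IDs get persisted as a JSON array; the class and storage shape are my own invention, not the actual userscript:

```typescript
// Core of a "mark as seen" userscript: remember tweet IDs in a set
// and persist them, e.g. via localStorage. All names are illustrative.
class SeenTracker {
  private seen: Set<string>;

  // Restore from a previously serialized array (e.g. JSON.parse of storage).
  constructor(saved: string[] = []) {
    this.seen = new Set(saved);
  }

  markSeen(id: string): void {
    this.seen.add(id);
  }

  isSeen(id: string): boolean {
    return this.seen.has(id);
  }

  // Array form, ready to JSON.stringify back into storage.
  serialize(): string[] {
    return [...this.seen];
  }
}
```

The actual add-on would then walk the timeline DOM, call markSeen on each tweet ID that scrolls into view, and dim anything isSeen returns true for.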

What I'm waiting for:

- better knowledge ingestion so it can use my notes

- personalization over time

- good dev-ops integration (push and deploy for me too).

- maybe something to have better separation of concerns on code so the messiness matters less, not sure if possible :)

- remote control my screen

- running an LLM locally

- have it build its own plugins for any website or service

wuiheerfoj · 3 years ago
I've been using ChatGPT to:

- write, and explain to me, more optimised algorithms for certain cryptographic operations

- explain funny mathy bits of papers that I don’t understand

- plan me a few days of activities for a city holiday

So far it's been great on all counts! I was able to get a faster turnaround in understanding the papers than I would have by probing a colleague.

twright0 · 3 years ago
What gives you confidence its explanations are accurate?

> write, and explain to me, more optimised algorithms for certain cryptographic operations

This domain in particular strikes me as a poor choice for this approach. "Don't roll your own crypto... but definitely don't let a language model roll it for you, either"

otabdeveloper4 · 3 years ago
This post is a clever joke, right?

Right...?

bonestamp2 · 3 years ago
> I’m asking what people want the models to do every day, all the time

I know three regular people using ChatGPT, here's how they're using it:

1. Franchise consultant: uses it to research opportunities, has it write business letters to franchisees he wishes to contact. Saves him time and is a better writer than he is.

2. Immigration lawyer: uses it to summarize info and write emails to clients. Saves her a ton of time.

3. School teacher: uses it to write report card and assignment feedback. It doesn't save any time at all, but the output is more elegant than if he wrote it manually.

alexvoda · 3 years ago
You are overlooking an important use case: SPAM.

Spam of all kinds and at automated-industry scale. Imagine blog-spam written in a variety of styles so you can no longer easily identify it as such. Imagine chum-boxes that no longer repeat themselves and are harder to identify as such. Imagine ads masquerading as content, as it already exists, but at scale.

And every time you slip up and click on one, it will learn a little more about you and create chum content better tailored to you.

Generative AI will facilitate a flood of algorithmic spam.

This kind of spam already exists; it just isn't scaled, because it still takes time and resources to create, and the people creating it are not the brightest, so for now it is easy to identify.

nr2x · 3 years ago
Natural language is now a fully functional user interface out of the box. This is bigger than the mouse.
bmitc · 3 years ago
Fully functional in what way? As far as I can tell, ChatGPT is a box that I put sentences into, and I get back grammatically correct sentences containing topics or words loosely statistically correlated with what was in my sentences, which may or may not be correct and often are not. The box has little to no memory. I honestly don't see what's so useful about this box.
mmcwilliams · 3 years ago
I think the problem is that people working in the field have a much different definition of what "fully-functional" means than users who believe that they're communicating with an intelligent, all-knowing or otherwise infallible being.
barrysteve · 3 years ago
In the long run, it will 'secure' knowledge and data. Nobody will know what's in GPT's databases.

Currently you can command kagi/google/http websites to return information. You can infer what should be in Google's search engine and track when information is deleted.

GPT is not commanded, it predicts with inaccuracy. So anybody who wants to black-hole information behind the scenes and never reveal clues to that fact, can do so.

All failed predictions are covered by the LLM's design: you cannot infer, without serious long-term study, that something has been removed deliberately. You cannot infer that a valid data entry exists and you failed to retrieve it, because unverifiable BS is the default failure state of LLMs.

High level tech people will invest in this, regardless of what the public values in it. Just like Elon's SpaceX and Tesla got lifted out of pitfalls by gov and VC, so too will the AI guys.

Let me put it this way: hoard and back up every scrap of online information you care about. Hypothetically, an LLM-fueled replacement for all of the 'free and open web' websites could limit information availability.

A metaphorical example would be leaving out Tiananmen Square. Which is fine when you can just Google it, but with the old freedom of information gone, an LLM has the ability to just BS you, and you'd never have a reason to infer it existed in the first place.

It's a super-injunction by default, a perfect repository for spies to dump data, a librarian who will answer any question but will only answer with the truth if he likes you.

No more Snowden and Assange leaks, there's no way to chase up a deleted video with a search engine.

Anyway, you get the idea. In the long run, the structuralists are licking their wet lips at the thought of re-establishing a hierarchy of information access. (Probably; I don't know.)

tomca32 · 3 years ago
> I’ve seen examples of “write an email to my boss”. It would take me longer to explain to ChatGPT what I want than to write these myself.

You're a better writer than me; also, English isn't my first language.

Besides that, I used ChatGPT to write a eulogy for my father. I knew roughly what I wanted to say but couldn't find a good way to say it. ChatGPT helped me there, even in my own language (Croatian), and there is just no way I could've made it as poetic as it did.

teaearlgraycold · 3 years ago
For me it’s a perfect replacement for Stack Overflow. Except every solution is tailored to my exact code and situation. I’ve even gotten it to walk me through things like installing WSL 2 without using the Microsoft Store after I nuked all Appx packages.

Maybe my favorite was pasting in pages of documentation describing all of the error codes for a library (the docs are the only source of truth) and getting it to output a very good typescript enum.
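For illustration, that docs-to-enum exercise produces something like the following; every code and name here is invented, not from any real library:

```typescript
// Hypothetical error codes as a model might distill them from prose docs.
// The numbers and doc comments are made up for illustration.
enum LibraryErrorCode {
  /** The request payload failed validation. */
  InvalidRequest = 1001,
  /** Credentials were missing or rejected. */
  AuthFailed = 1002,
  /** Too many requests; retry after the indicated delay. */
  RateLimited = 1003,
}

// Numeric TypeScript enums get a reverse mapping for free,
// so a raw code can be turned back into its name.
function errorName(code: LibraryErrorCode): string {
  return LibraryErrorCode[code];
}
```

The reverse mapping is the nice part: error codes from a wire response map straight back to readable names in logs.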

csmpltn · 3 years ago
You don't need a full-blown LLM to find answers to questions on StackOverflow. You can do that today with any search engine.

> "I’ve even gotten it to walk me through things like installing WSL 2 without using the Microsoft Store after I nuked all Appx packages."

That's a search engine query you can issue already today, without involving any LLMs. It will cost a fraction of the cost of running this with an LLM, and it'll actually bring you to the "source" of the information (a thread on StackOverflow with the full context - including "wrong answers" which are just as useful) - unlike an LLM.

Deleted Comment

pg_1234 · 3 years ago
Probably because much of the time it's just a (hopefully correctly) curated subset of stack overflow ... tailored to your current focus.
throwaway2037 · 3 years ago
Please write a blog post about this! That is an amazing story.
TheAceOfHearts · 3 years ago
I have a friend that has been using ChatGPT to help him write technical books which collect existing niche information and present it in a way that's more accessible and organized, in Spanish. You'd be amazed by how many small niches there are without robust technical books and well organized resources. And while the information is all "out there", it's often not really easily accessible to beginners.
kolinko · 3 years ago
Text editing - if any part of your job is about writing, you can now just brain-dump everything, structure it more or less, throw it into ChatGPT, and have it produce a clear and readable article. In my case, the output is better than anything I would ever write.

Ditto all the editing work in newspapers, intranets, etc. The whole field of proofreading was virtually extinguished overnight.

Marketing agencies - and I spoke to a few - increased their workers' productivity 2-4 times (sic!), virtually overnight. Anything from writing briefs to writing copy.

Programming - most of my programming work is deep algorithms, so not much help there, but for writing boilerplate code with new APIs, or writing in a language I'm a bit rusty in, ChatGPT is better than anything else.

Customer service helplines / chatbots (and the same for intranets) - we don't see it just yet, because it takes a bit more time to build a good system, but there are probably thousands of projects worldwide right now building those for every niche conceivable.

Business intelligence - we used ChatGPT, with success, in our deep tech seed fund, helping out with initial project ddil.

And, essentially, rubber-ducking, but for every single field out there. I just discussed with a psychiatrist how he can use even plain GPT-4 as an additional consultant. You need to be aware of the limitations, of course, but it is already immensely useful in its current form - and dedicated solutions for medicine are coming very soon.

That's the short-term perspective and the low-hanging fruit. On top of that, you have thousands of projects now figuring out how to apply LLMs to specific niches. It was difficult before, because you had to train your own models; now you can just fine-tune existing ones, do embeddings, or use plain prompt engineering.

Oh, and also synergy with different AI modalities - we've had massive growth in voice recognition and generation, visual recognition, and so on. And LLMs are the glue that adds a layer of understanding underneath.

thinkingemote · 3 years ago
I hope you discussed sharing confidential patient health information with third parties, OpenAI etc.

Nevertheless offline models should alleviate some of this.

PeterStuer · 3 years ago
"In fact, no one in my extended sphere of friends and family has asked me anything about chatGPT, midjourney, or any of these other models. The only people I hear about these models from are other tech people."

I have the opposite experience. Everyone I meet outside tech has been exploring ChatGPT, or at least has heard of it and is extremely curious. And in non-tech student circles, a (non-STEM) TA I know said students who until a month ago were bad to mediocre (scoring 8-12/20) are suddenly all turning in top 16-18/20 assignments this month. You can argue about what to do given this change, but you cannot deny the impact.

lagrange77 · 3 years ago
I have the same experience. And ChatGPT was the turning point. A few months ago, to my surprise, I wasn't even able to excite my non-technical environment about DALL-E 2, not to mention other ML models that would excite technical people.
revelio · 3 years ago
I was sitting in a Starbucks yesterday, opposite two girls doing some language study, and one of them was joking that the other should ask ChatGPT about some grammar point. This idea that only tech people know about it seems way off.
terpimost · 3 years ago
We are going to have almost on-the-fly translation of video and audio. We are going to understand and write code faster. Knowledge management will change: instead of a tool that partly searches the web, we will get human-readable responses. A language model is about to improve language-related things for us - how much more of a killer app could it be?
wouldbecouldbe · 3 years ago
Code things I've used it on:

- I've thrown in JSX React components and asked it to transform them to TSX

- I've thrown in TSX components and asked it to write some tests

- I've asked it to make an offline queue so user requests are stored and sent once online again

- I've asked it to rewrite tests from Enzyme to Jest

- I've asked it to add a full-screen function to react-native-video, which is annoying.

Often Im doing something else in the meantime, like coding, in a meeting or having a beer.

Finally we can be drunk and code at the same time ;).

Joking aside, it's been a huge productivity boost, and if you ask things properly will write hugely detailed and correct code.

I've also used it to understand other languages/code I was less familiar with, for instance C and SQL procedures.

The above only works with v4; 3.5 is too inaccurate. But it is indeed slow - small things I can do faster myself.

With writing articles I've been disappointed so far; even corrections or style adjustments I ask for get reverted a few questions later.

However, for writing children's stories to help my kid learn how to read, it's really good, in every language. "Write a story for kids of age ... about ... using the following words." It came up with nearly perfect stories my kid loved.

petesergeant · 3 years ago
I use ChatGPT at least 10 times a day, as a software developer, and I agree with all your assessments about the use-cases you mention. Here are my ten most recent prompts:

* Asking about how back-propagation works with multiple output nodes

* Examples of successful ICOs

* A deep dive into what it means to calculate the gradient of a function, with me asking lots of clarifying questions

* A deep dive into how electricity pricing works in the UK, digging into the market clearing price

* Looking for a generic "unwrap one layer of type" utility type in TypeScript

* "In JavaScript, I want to format a date as YYYY-MM-DD_HH-MM-SS"

* Seeing if there's a more concise way to get Zod to define an item whose key is required but whose value can be undefined, other than using a union

* Ideas for naming two user fields, one that's a changed user, one that's the user making the changes

* Digging into the implications that Camilla is called the Queen Consort, not just the Queen

* "Excel I want to show the weekday as a single letter"

So, basically, it's my go-to instead of Google + StackOverflow
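For what it's worth, the date-formatting prompt in that list has a common zero-dependency answer along these lines (a sketch of the usual approach, not necessarily what the model returned):

```typescript
// Format a Date as YYYY-MM-DD_HH-MM-SS using only built-ins.
function formatTimestamp(d: Date): string {
  const pad = (n: number): string => String(n).padStart(2, "0");
  return (
    `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}` +
    `_${pad(d.getHours())}-${pad(d.getMinutes())}-${pad(d.getSeconds())}`
  );
}
```

The +1 on getMonth() is the classic gotcha: JavaScript months are zero-based.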

cgearhart · 3 years ago
I’d be interested to see what it said about differentiating a function (and I’m assuming you mean in the context of programming). I consider that one of the dirtier secrets of gradient-based ML.
petesergeant · 3 years ago
Also, I've found it very useful for identifying books, TV shows, films, games, etc. that I remember some details of (e.g. was released around 1994, was about web design) but not all of.
SanderNL · 3 years ago
You, good sir, have a disturbing lack of imagination.

What I can ask my computer has just leaped from templated strings to raw human conversation.

I have a hard time imagining something not getting impacted.

First of all, iPhone autocomplete... (but I guess any decade-old RNN is an improvement there)

epigramx · 3 years ago
It's just a glorified google search. You'd save more time googling what you need and picking 2-3 results to read briefly.

People have no clue human intelligence has very little to do with "statistical analysis of old data".

DangitBobby · 3 years ago
It's actually quite a lot better than a Google search. Yesterday I gave it the output of some SQL DDL and told it to write me a script in Go that reads that from a CSV, then I asked it to generate a sample CSV to verify the script.

How is that different from googling? Lots of articles won't bother to include imports, and the structs they use definitely won't match your use case because their dataset is different. What if my use case is a bit strange and I need to embed the file instead of reading it from the file system, for example? I can ask ChatGPT and it will update the example program, using my exact file name and the variable names for the problem I've described, and the program runs as written!

I don't write much Go (so I don't actually care to commit the hello_world.csv ritual to memory), but I know enough about it to verify that the program doesn't have any glaring issues and make my own tweaks as necessary. Saved so much time for me in this scenario.

literalAardvark · 3 years ago
It can't replace search, actually. It doesn't know everything a decent search engine can find, and might not even remember the specific PHP command-line parameter you're looking for.

What it can do is reason through some pretty tricky interdisciplinary answers. It's about as useful as having an intern who has finished literally every white-collar college program out there but has no idea what they're doing. It takes some prompting to get useful work out of it, but it is possible, and it is very effective when you get it right.

The $20 sub has saved me quite a bit of time; the main challenge is remembering to set it to GPT-4 for every damn conversation, as the previous models are trash.

HDThoreaun · 3 years ago
Google search is a trillion dollar product.

Dead Comment

what-no-tests · 3 years ago
I have read a lot of books (sociology, psychology, philosophy, history, religion, etc), and often I'll have an insight that stands in apparent contradiction with some concept I learned in one book but in alignment with a concept I've learned from another.

I use ChatGPT to understand these concepts, their background, and to bridge the gap between them.

Getting to the bottom of what Jean Baudrillard really was saying in "Simulacra and Simulation" and applying that to what Larry Wall meant when he said that Perl is the first postmodern language is really something!

Deleted Comment

Irishsteve · 3 years ago
Similarly, what's the point of Stack Overflow or a wiki when you can just get the answer you want from a book in the library?

What I value about these LLMs is that they are essentially a condensed version of the internet (despite being stupidly large for normal hardware currently).

Usually if I’m building a recommendation or search algorithm I have to use the data from the company I’m working with. This makes it possible for me to encode the entire internet into a model that might be running on a product that only has 100 users.

deltree7 · 3 years ago
LLMs will make average and below-average writers/marketers/PR people extremely good.

You can no longer mock "please do the needful" from $1/hr employees in India. They will be communicating at the same level as an average American, and the smart ones can completely take over large fields.

A single person, can wear multiple hats and not get blocked.

This can apply at a global scale of at least 4 billion people.

Just because you lack imagination doesn't mean it isn't good

verelo · 3 years ago
> I’ve been shown some neat pictures people made that they thought were cool. I don’t know that I need this every day.

At least they’re less obsessed with apes than last summer…

olalonde · 3 years ago
To be honest, I can't tell if you are genuinely wondering or just (virtue) signaling that you're not using it, but here's a sample of my own usage: https://news.ycombinator.com/item?id=35299121
cgearhart · 3 years ago
Not virtue signaling, but I seem to be asking a different question than people are answering. I’m asking what the product opportunity is and people keep telling me examples of tasks that they use it for.

In many cases the examples are one-off and the only product opportunity is the generative model interface itself. Looking broadly over the replies what I’m seeing is “there’s a thing that used to fail the cost/benefit test, but now the cost is so low that I can automate these things”. So part of my problem is (1) the small benefits of these tasks mean the value proposition comes from volume—that probably comes from the generality of the task engine, and (2) there may be some niche product opportunities on top of the model platform, but the primary big winner here is the platform itself. (That’s not necessarily a new insight, but it seems especially true here.)

The terrifying part is how often I hear people in this thread and elsewhere mentioning tasks that are not fault tolerant to the failure modes of these models. (For example, I had a coworker tell me their relative is a doctor using ChatGPT to diagnose patients.) People keep focusing on the risks of AGI killing us all with paperclips, but I’m much more worried about getting run over by some idiot asking ChatGPT to drive their car.

prox · 3 years ago
For me, it gives an inkling of a web that existed before, with less friction. Instead of going to a webpage with verbose ad-ridden fluff, I get a more or less frictionless answer, very specific to my question.

Funny enough I don’t use it a lot for programming, maybe just to jog my memory on a topic.

bionhoward · 3 years ago
The thing has enabled me to take a major code project in an exciting direction in a short amount of time. Paradoxically, I find it does better with more difficult tasks, because the prompts tend to have more detail. It can generate semantic Cypher queries for combinations of combinations of combinations of elements if you want; it's ridiculous at coding already and only gonna get more insanely good at it in the future. Yeah, sometimes it forgets context and you have to guide it a little, but it can still rewrite a whole module in a new style with lots of specific changes in under 2 minutes. Good luck coming close to typing that fast even if you knew exactly what to write. Coding feels pretty revolutionized overnight for me, tbh.
elcomet · 3 years ago
I don't have the same experience as you at all. Most of my non-tech friends or family members asked me about chatGPT. You don't find it useful to write emails maybe because you're a native speaker, but it is extremely useful for non-fluent english speakers.
unboxingelf · 3 years ago
I defined an interface and asked ChatGPT to implement it, in Go, as method receivers on a struct named “repo” against sqlite. The code compiled and I wrapped it in unit tests to ensure it worked. Overall saved me an hour of boilerplate, would recommend.
martindbp · 3 years ago
What is the killer app of the internet, or the smartphone? It's a silly question. It's many little things, some foreseen and some not.

One thing you're missing is that we now have a pretty good solution to any NLP pipeline that in the past you'd have to spend months to get right. You can probably still get better results by supervised training on specific tasks but it's good enough. NLP (as we knew it) is dead. This will take some time to show in the applications we use, as people figure out how to use and integrate it, and costs need to come down, but it will make it trivial to add smart functionality for things you previously needed an in-house ML team for.

version_five · 3 years ago
Sorry, one other thing, as an actual AI researcher:

These generative models, whether NLP or vision, are cool but are really examples from a very narrow field. Most ML researchers and practitioners are working in completely different areas and would not obviously benefit from the new generative models (which are themselves productized extensions of existing tech trained on more data), so nothing's going to change day-to-day. If you were working on some alternative general-purpose generative model, maybe you got scooped. Otherwise it's business as usual.

The "revolution" is happening for tech savvy non-ML people, "tech-bros" colloquially.

koonsolo · 3 years ago
I'll ask you another question: What are people searching for on google right now that can't be answered by ChatGPT? I would say 95% of searches can be a simple question & answer of ChatGPT. I've tried it, it works.
throwaway2037 · 3 years ago
Note: When I write "Midjourney" below, I mean any AI/ML-based art generation tools.

Art generation is going to be groundbreaking for the advertising industry. Basically, you can hire summer interns from art academies or liberal arts schools to type cool stuff into Midjourney and generate amazing art for your ads. You no longer need to pay real artists, who, sadly, are the very people whose original artwork was used to train the model.

The same can be said for low-end graphic arts. Stuff like: make me a greeting card for this big event. Today, you pay a graphic artist to whip something up in 2-72 hours. That will be replaced by someone with modest English skills working in a call center somewhere in India or the Philippines. They might chat with you for five minutes (voice or text) to brainstorm ideas. Then they will "drive" Midjourney and put together a nicely themed party invite.

On the more advanced side, I do think artists will browse the "best of" portfolios on Midjourney (and others) to get new ideas. They might also use Midjourney to get a head start.

The next logical step beyond Midjourney is to generate the same image, but as a 3D model. I think people (myself included!) really (really, really!) underestimate the cost of creating 3D models for films, adverts, and games. If Midjourney could give you a starting model, you might save hours (or days) of work.

Next-next: Midjourney can provide basic animations of the same 3D model. Again, you can download it in a wide variety of formats, so you can import and tweak as necessary. Think: pay to play. A rendered GIF is cheap as hell, but downloading an advanced CAD format with 50K points in the model might cost hundreds of USD. (Remember: you are paying for a SaaS engine, not an expensive, talented artist.) Imagine: "Hey Midjourney, I need a 60-second animation of cute animals, like Animal Crossing, sitting at a table enjoying our new brand of tea called 'It's Great Tea Meet You!'" (Use ChatGPT to generate the first draft of the script.) Writing that just made me think: OK, now you can add voiceover.

The possibilities for the commercialisation of generating still and moving picture are nearly endless, and many will be useful for the advertising, film, and gaming industry.

yummypaint · 3 years ago
I think the killer app will involve adding more layers of AI to make something that is closer to an agent than a simple prompt completion engine. Maybe a more self-contained system that has an internal cycle of asessment, generating prompts, executing output, and readjusting prompts. These agents could conceivably work on long term, complex projects with minimal supervision or interaction. If a general purpose system can get a foothold and be even 10% as good as a person, it could be enough to fundamentally reshape the workforce.
danschuller · 3 years ago
One thing I find it really great for is language learning. It can create examples, explain grammar, contrast with languages you already know, etc. Also, I'm not a sysadmin but occasionally have to do sysadmin things, and asking for the shell commands I need in English is far, far faster than using Stack Overflow etc. (i.e. pulling logs from multiple servers and extracting bits). And if I want to use a library, I describe how I want to use it and get a small example entirely tailored to me.
iLoveOncall · 3 years ago
My girlfriend and I use it to turn a short bullshit paragraph of corporate lingo into a bigger one. It's great for that, but that's it.

GPT will only ever be a good writing support tool.

Deleted Comment

gonzo41 · 3 years ago
In the content space, there are whole publishing industries at risk from GPT-style models.

Would Reddit or HN still be interesting if there were bots that talked in the popular tone of whatever subreddit or thread dominated the conversation?

AI's effect will be like pollution on our culture. It'll speed up and help a lot of things and be very helpful generally. But it will create problems in 'human' spaces.

cma · 3 years ago
> I’ve been shown some neat pictures people made that they thought were cool. I don’t know that I need this every day.

Add 3D and motion and this becomes the holodeck: you describe a scene and it creates it, direct to your VR goggles.

Generative AI that can take in a scene and alter it rather than fully create it becomes real augmented reality from sci-fi: not just a HUD or greenscreened in elements, major transformations.

patrick451 · 3 years ago
There is no killer app because ChatGPT is largely useless. Its results might be right, half right, half wrong, or totally wrong. All it's good for is spouting bullshit. It's like talking to a really outgoing, really confident used-car salesman.
th3byrdm4n · 3 years ago
I’ve got a codebase written in the extreme programming paradigm that has become a linchpin for operating our enterprise.

I can fire up an AI to build out in 10 minutes the missing unit tests that would take a developer 3 weeks to accomplish.

Scale that for 155 different projects … that’s over a year of development time in about 24 hours of compute

simion314 · 3 years ago
Imagine a code/programming AI that I can ask to review my code and find my mistakes, or write unit tests for my new functions.

I am tempted to write a small program to fix my Internet comments and my bad English expression. I've done some tests and I see that ChatGPT finds the places in my comments that can be improved.

r0b05 · 3 years ago
Instead of looking at the use of an LLM in isolation, consider the value it could have in the future, combined with APIs, other models, a speech interface, etc.

Sure, it can help you write an email now, but the real magic could be when these things come together in a symphony of intelligence.

seydor · 3 years ago
Yeah it is tiring, currently.

But soon people will just enter one goal, "make me money" and the program will go on a loop, pausing only when making an important decision to get the approval of the owner

Also, the input buffer of LLMs is increasing. Soon we will be able to begin with "write me a full 3D game"

jedberg · 3 years ago
> In fact, no one in my extended sphere of friends and family has asked me anything about chatGPT

A couple of weeks ago they wrote an episode of South Park with ChatGPT (and it was about ChatGPT). It's definitely gone mainstream amongst students who are using it to do homework.

satvikpendem · 3 years ago
I use it for APIs. Specifically, I use it to help think about how I should architect my APIs, as well as how I should implement them. I can ask ChatGPT to do both things for me. Even if it's somewhat wrong, it still gives me a good enough direction to go on.
golergka · 3 years ago
Almost all office work is just reading words in boxes and looking at pictures and then writing words in other boxes.

There's insane potential to automate it. Think b2b, not b2c.

acdha · 3 years ago
“Almost all” makes me question how much experience you have doing those jobs. They exist but work which is truly that rote has been getting automated for half a century so what I’ve found is that what you see now tends to deceptively appear simple while hiding judgement, uncertainty, and inference.

That’s especially true for things which involve liability. If Google builds a system to recommend YouTube videos and ads, it’s a win as long their error rate is below a certain level. If it’s your insurance company rejecting claims, however, people can die that way and the lawsuits for breaking contracts or legal standards can far exceed the believed savings.

machiaweliczny · 3 years ago
The killer app is that it's glue between natural language and code. It's not realized yet, but it's gonna be BIG. It's simply gonna be Copilot for everything.
logicchains · 3 years ago
Have you heard of "editing"? Or "translating"? Those are entire industries that can now be almost entirely automated with no loss of quality.

Deleted Comment

Salgat · 3 years ago
It replaces googling. All the hard work of finding relevant info and piecing it together is done for you. For programmers it's just a faster stackoverflow.
brundolf · 3 years ago
I have a group of nontechnical friends who use Midjourney constantly for D&D and similar games (both brainstorming and visuals for players)
Robotbeat · 3 years ago
Ask college kids. They’re all using it for everything.
bottlepalm · 3 years ago
For a non native English speaker it's amazing. It can take any very broken blob of text and turn it into perfect English.
chociej · 3 years ago
I use it to plan meals and make grocery lists each week. It's very good at that task, and it saves me a lot of monotony.

Deleted Comment

tomkarho · 3 years ago
> I’m asking what people want the models to do every day, all the time.

Rule 34. When in doubt, always defer to rule 34.

CalRobert · 3 years ago
Me? It takes the SQL for dozens of dbt models and spits out the schema.yml so I don't have to.
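For readers who haven't used dbt: each model's schema.yml declares the model's name, columns, and tests, which is exactly the boilerplate being generated from the model SQL here. A hand-written example of the target format for a hypothetical `orders` model (all names invented):

```yaml
version: 2

models:
  - name: orders                  # hypothetical model built from orders.sql
    description: "One row per order"
    columns:
      - name: order_id
        description: "Primary key"
        tests:
          - unique
          - not_null
      - name: customer_id
      - name: amount
```

Writing these by hand for dozens of models is pure column-listing drudgery, which is why it's a good fit for an LLM.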
chasil · 3 years ago
Parent article said the pressure was "be the first or be the best."

The proper answer is "to be correct."

Thousands of Africans sat at their desks, trying to achieve correctness.

This will become a (correct) commodity, but not today.

(Unless you are really good at differential equations.)

IanCal · 3 years ago
Scrolling back through some discussions with chatgpt and a few things with bing (later ones more likely to be gpt4)

* Explaining what a do/while(false) loop was for as I'd not seen that construct before

* Discussing what DIDs were and how web DIDs worked as the RFC was very detailed in a lot of areas I just didn't care about. A discussion with a pretty knowledgeable person in the area was what I needed, and what I got. It explained what the well-known part was, explained my confusion around resolving to a document and resolving to a resource (where I was mentally stuck).

* Creating a learning plan, diagrammed by mermaid, for progressing in bouldering. Each major step broken down into sub-parts to practice

* Finding https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem given bad explanations that didn't lead me to the right place in google

* Finding https://blockprotocol.org/ given a shaky memory of "some composable ui framework by someone famous" iterating a little with bing. I had failed to find it before manually.

* Explained and created a table of bouldering gradings as the place I go uses a different one to the videos I see

* Discussed project ideas to do with my son, gave me great ideas around electronics that I think are a good fit, as well as a few other things. The most useful part here was being able to say "that's too generic, I need proper projects" and "that's too simple for him" and have it update. It then also created some good explanations at different levels about how radios work.

* General discussions about long term impacts of LLMs, potential use cases

* Career advice

* Generating art that we'll be getting commissioned for the house

* NER without any coding

* Generating ember templates, CSS and example filling data for a custom framework given a problem statement (what I'm actually building right now)

* (edit) I just took requests from my kids and made them some colouring in pages with robots, firefighting robots, lego ninjas, owls, frogs and crabs.

> In fact, no one in my extended sphere of friends and family has asked me anything about chatGPT, midjourney, or any of these other models. The only people I hear about these models from are other tech people.

I've had family members tell me they've used it to create reports, and used it to create marketing copy, a website and lecture slides for others.

ckw · 3 years ago
Needed to safely move a bulk carrier on short notice recently:

Q: Describe turning 180 degrees and continuing down stream with the current in a heavily loaded bulk carrier from a stationary position tied to a dock with five knots of downstream current using two 50 ton Ballard pull tractor tugs. Be very detailed and specific in describing the actions required to complete this action safely.

  A: To safely turn a heavily loaded bulk carrier 180 degrees and continue downstream with the current from a stationary position tied to a dock with five knots of downstream current, using two 50-ton bollard pull tractor tugs, you must follow a series of steps. These steps are designed to ensure a smooth and controlled maneuver while minimizing the risk of collision, grounding, or damage to the vessel, dock, and tugs.

  1. Preparation: Prior to beginning the maneuver, ensure that all necessary communication equipment is functional and tested. This includes VHF radios, signal flags, and/or lights. Establish communication protocols and channels with the tugs and any other relevant parties, such as pilots and harbor control. Additionally, check the weather forecast, tidal information, and any local regulations or restrictions that may apply.
 
  2. Positioning of tugs: Position one tug at the bow and the other at the stern of the bulk carrier. The tug at the bow will primarily be responsible for pulling and steering the vessel, while the tug at the stern will act as a brake and assist in maneuvering the vessel as needed. Securely attach the towlines to the designated towing points on the vessel.

  3. Cast off lines: Once the tugs are in position and ready, instruct the crew to release all mooring lines from the dock. As the lines are released, the tugs will need to maintain tension on their towlines to keep the vessel in position and prevent it from drifting downstream with the current.

  4. Initiate the turn: The tug at the bow will start the turning process by pulling the vessel's bow away from the dock and into the current. At the same time, the tug at the stern will push the vessel's stern towards the dock to create a pivoting motion. Both tugs must continuously adjust their power and positioning to maintain control of the vessel as it turns.

  5. Manage the current: During the turn, both tugs must work together to counteract the effects of the five-knot downstream current. The tug at the bow will need to pull harder to overcome the current, while the tug at the stern will need to push and also act as a brake to prevent the vessel from gaining too much speed.

  6. Monitor the turn: Throughout the 180-degree turn, the vessel's crew and tugs must maintain constant communication to ensure a smooth and controlled maneuver. Monitor the vessel's position, speed, and proximity to the dock, other vessels, and any navigational hazards. Adjust the tugs' power and positioning as needed to maintain control and avoid collisions or grounding.

  7. Complete the turn: As the vessel approaches the completion of the 180-degree turn, the tugs will need to adjust their power and positioning to slow down the vessel and align it with the downstream current. The tug at the bow will ease its pulling force, while the tug at the stern will continue pushing until the vessel is properly aligned.

  8. Proceed downstream: Once the vessel is aligned with the current and moving downstream, both tugs will gradually reduce their power to allow the vessel to continue under its own momentum. The tugs will remain close by to provide assistance as needed until the vessel is safely clear of the harbor and any navigational hazards.

  9. Release tugs: When the vessel is at a safe distance from the dock and navigational hazards, and under the control of its own propulsion, the towlines can be released, and the tugs can return to their base.

Deleted Comment

Deleted Comment

fdgsdfogijq · 3 years ago
I work on a research team in FAANG. What it really feels like is that one company made everyone else obsolete. And we go to work on NLP models that underperform ChatGPT by a huge margin. Twiddling my thumbs and keeping quiet while no one wants to recognize the elephant in the room.

Also, there is no "working in AI", a few thousand people are doing real AI at most. The rest of us are calling an API.

lacker · 3 years ago
This reminds me of back in the mid 2000's, there were a lot of smart people working on search algorithms at different companies. But eventually, you'd talk to someone smart working on Yahoo Search, and they would just be kind of beaten down by the frustration of working on a competing search engine when Google was considered to be by far the best. It got harder for them to recruit, and eventually they just gave up.

So... I don't know where you're working. But don't twiddle your thumbs for too long! It's no fun to be in the last half of people to leave the sinking ship.

Deleted Comment

UncleOxidant · 3 years ago
Why do you think OpenAI is so far out in front? It's not like there's a lot of secret sauce here - most of this stuff (transformers, etc.) is all out there in papers. And places like Google & Meta must have a lot more computing resources to train on than OpenAI does, thus they should be able to train faster. Do you think OpenAI has discovered something they haven't been open about?
JumpCrisscross · 3 years ago
> Why do you think OpenAI is so far out in front?

There is a network effect forming around its models. The strengths of its kit speak for themselves. (It also cannot be overstated how much making ChatGPT public, something its competitors were too feeble, incompetent and behind the curve to do, dealt OpenAI a massive first-mover advantage.)

But as others note, other models are in the ballpark. Where OpenAI is different is in the ecosystem of marketing literature, contracts, code and e.g. prompt engineers being written and trained with GPT in mind. That introduces a subtle switching cost, and not-so-subtle platform advantage, that–barring a Google-scale bout of incompetence–OpenAI is set to retain for some time.

fdgsdfogijq · 3 years ago
I don't work at Google. I think other FAANGs underinvested in this area as they didn't think it was promising. But I will admit, I am suspicious that Google is incompetent. Probably they can come back given how much money they will be forced to throw at it. But Bard is clearly behind and I don't believe their "abundance of caution" arguments for why Bard can't even code.
redox99 · 3 years ago
I don't know about GPT4, but GPT3.5 I'd bet is pretty traditional and boring. Its power comes from a really good, properly curated dataset (including the RLHF).

GPT3.5 turbo is much more interesting probably, because they seem to have found out how to make it much more efficient (some kind of distillation?).

GPT4, if I had to make a very rough guess, probably uses flash attention, 100% of the (useful) internet/books for its dataset, and highly optimized hyperparameters.

I'd say with GPT4 they probably reached the limit of how big the dataset can be, because they are already using all the data that exists. Thus for GPT5 they'll have to scale in other ways.

black3r · 3 years ago
For a little more than a year I worked in an AI startup doing basically everything other than AI (APIs, webapps, devops...), but from what I've seen there the "secret sauce" to AI success is the training process (dataset, parameters, fine-tuning steps, ...). And OpenAI isn't open about theirs since their beginnings.
sangnoir · 3 years ago
> Do you think OpenAI has discovered something they haven't been open about?

They have not, which makes me curious about which company gp works for because the "F" and "G" in FAANG are publicly known to already have LLMs. Not sure about Amazon, but I'm guessing they do too.

As an outsider, the amazing thing about ML/AI research is that you get a revolutionary discovery of a technique or refinement that changes everything, and a few months later another seminal paper is published[0]. My bet is ChatGPT is not the last word in AI, and OpenAI will not have a monopoly on upcoming discoveries that will improve the state of the art. They will have to contend with the fact that Google, Meta & Amazon own their datacenters and can likely train models for cheaper[1] than what Microsoft is paying itself via their investment in OpenAI.

0. In no particular order: Deep learning, GANs, Transformers, transfer learning, Style Transfer, auto-encoders, BERT, LLMs. Betting the farm on LLMs doesn't sound like a reasonable thing to do - not saying that's what OpenAI is doing, but there are a lot of folk on HN who are treating LLMs as the holy grail.

1. OpenAI may get a discount, but my prediction is that when they burn through Microsoft's investment, they'll end up being "owned" by Microsoft for all intents and purposes.

majormajor · 3 years ago
A lot of FAANG data folks aren't on the teams that were doing research into this stuff and weren't using the latest fruits of that research.

OpenAI has released a ton more easy-to-use-for-everyone stuff that has really leapfrogged what a lot of "applied" folks everywhere else were trying to build themselves, despite being on-the-face-of-it more "general."

rqtwteye · 3 years ago
I think it’s the way things go usually. The big players have a business to run so they can’t focus much on innovation. OpenAI has the only purpose right now to push AI and nothing else. Once they have a real business they will also slow down.
letitgo12345 · 3 years ago
They have been collecting human feedback data for 2 years + probably have a lot of data from Copilot + are training with large context models + have invested a ridiculous amount in curating pretraining data -- the kind of stuff that won't get you a ton of pubs (so you won't see Google researchers having focused on it a lot) but apparently turns out to be super important for a good LLM
logicallee · 3 years ago
All of the neural network architecture for human level thinking and processing, including vision, speech, emotion, abstract thought, balance and fine motor skills, everything was publicly released in April 2003, twenty years ago this month. It's a 700 megabyte tarball and sets up an 80b parameter neural network.

What? Huh? Yes the human genome encodes all human level thought.[1] Clearly it does because the only difference between humans that have abstract thought as well as language capabilities and primates that don't is slightly different DNA.

In other words: those slight differences matter.

To anyone who has used GPT since ChatGPT's public release in November and who pays to use GPT 4 now, it is clear that GPT 4 is a lot smarter than 3 was.

However, to the select few who see an ocean in a drop of water, the November release already showed glimmers of abstract thought; many other people dismiss it as an illusion.

To a select few, it is apparent that OpenAI have found the magic parameters. Everything after that is just fine tuning.

Is it any surprise that without OpenAI releasing their weights, models, or training data, Google can't just come up with its own? Why should they when without turning it into weights and models, the human neural network architecture itself is still unmatched (even by OpenAI) despite being digitized twenty years ago?

No, it's no surprise. OpenAI performed what amounts to a miracle, ten years ahead of schedule, and didn't tell anyone how they did it.

If you work for another company, such as Google, don't be surprised that you are ten years behind. After all, the magic formula had been gathering dust on a CD-ROM for 20 years (human DNA which encodes the human neural network architecture), and nobody made the slightest tangible progress toward it until OpenAI brute forced a solution using $1 billion of Azure GPU's that Microsoft poured into OpenAI in 2019.

Is your team using $1 billion of GPU's for 3 years? If not, don't expect to catch up with OpenAI's November miracle.

p.s. two months after the November miracle, Microsoft closed a $10 billion follow-on investment in OpenAI.

[1] https://en.m.wikipedia.org/wiki/Human_Genome_Project

andsoitis · 3 years ago
Having a model does not a platform or a product make. You also need users and mindshare.

OpenAI is enjoying first mover advantage around the platformication and product-ification of LLMs.

For instance, why has G not yet exposed some next-level capabilities in mail, in docs, and many of their other properties?

Why do Google Assistant and Amazon Alexa and Apple Siri still suck?

nicpottier · 3 years ago
Until we see otherwise, don't we have to assume there's some secret sauce? Bard doesn't match GPT4 and it isn't for a lack of trying. (though perhaps that will change, so far that's the case)
hackerlight · 3 years ago
It probably is the secret sauce which remains undisclosed. Differences that seem small can lead to large differences in model quality.
diego · 3 years ago
If you try Bard or Claude or character.ai they are not far behind GPT4. They might even be on par in terms of raw LLM capabilities. ChatGPT has better marketing and in some cases better UX. A lot of this is self-fulfilling. We think it's far ahead, so it appears to be far ahead.
qqtt · 3 years ago
ChatGPT is cool and novel, but FAANG's requirements for ML/AI go far beyond what ChatGPT provides as a product. ChatGPT is good at answering questions based on an older data set; FAANG typically requires up-to-date, real-time inference over huge, rapidly changing data sets.

Working on the practical side of ML/AI at FAANG, you will probably be working with some combination of feature stores, training platforms, inference engines, and so on - all attempting to optimize inference and models for specific use cases, largely ranking: which ads to show which customers based on feature-store attributes, which shows to show which customers. All these ranking problems exist orthogonal to ChatGPT, which uses relatively stale datasets to answer knowledge-based questions.

Productionizing these ranking models, from training to inference, is a huge scaling problem. ChatGPT hasn't really come close to solving it in a general way (and also solves a different class of problems).

yanderekko · 3 years ago
Agreed. For my job maintaining real-time models with high business value to be disrupted by a chatbot, an LLM would have to be able to plug into our entire data ecosystem and yield insights in realtime. The backend engineering work required to facilitate this will be immense, and if the answer to that is "an LLM will create a new backend data architecture required to support the front-end prompt systems", then... well, suffice to say I can't see that happening overnight. It will require several major iterative and unpredictable pivots to re-envisage what exactly engineers are doing at the company.

For the time being, I expect LLMs to start creeping their tendrils into various workflows where the underlying engineering work is light but the rate of this will be limited by the slow adaptability of the humans that are not yet completely disposable. The "low hanging fruit" is obvious, but EVPs who are asking "why can't we just replace our whole web experience with a chatbot interface?" may end up causing weird overcorrections among their subordinates.

fdgsdfogijq · 3 years ago
I can tell you that we have applied teams working on open problems which can be solved out of the box with ChatGPT. It's a huge deal.
alfor · 3 years ago
ChatGPT is human-level intelligence; it's not just novel and cool, it's the thing. Remember, GPT-4 training was finished 6 months ago. Listen to people at OpenAI: their concerns are disruption to the world, UBI, and getting people used to superintelligence as part of our world. I think they have quite a few things in the pipeline.

So yes ads optimisation/recommendations still need to be reliable for the time being, but for how long?

zone411 · 3 years ago
I'm quite surprised at how little progress FAANG companies have made in recent years, as I believe much of what's happening now with ChatGPT was predictable. Here's a slide from a deck for a product I was developing in 2017: https://twitter.com/LechMazur/status/1644093407357202434/pho.... Its main function was to serve as a writing assistant.
Analog24 · 3 years ago
Scaling up an LM from 2017 would not achieve what GPT-4 does. It's nowhere near that simple. Of course companies saw the potential of natural language interfaces, there has been billions spent on it over the years and a lot of progress was made prior to ChatGPT coming along.
BulgarianIdiot · 3 years ago
Calling an API doesn't mean no value is captured. There are vastly complex integrations of LLM as a small component in larger systems, with their own programming, memory, task models and so on.

If you think GPT is just about chat, you've misunderstood LLMs.

blazespin · 3 years ago
Folks need to start getting over themselves. It's pretty trivial to get GPT4 to explain how transformers work, where the bottlenecks are in both performance and learning, and start modifying pytorch.

It's really not that complicated. Gatekeeping is so over.

xnx · 3 years ago
Not sure why LLM would make Facebook (ads), Apple (hardware), Amazon (hosting, retail), Netflix (tv) obsolete. It's definitely something Google needs to think about, but there's no reason to think they won't again be the leader soon.
atonse · 3 years ago
I actually think Apple is in a unique position here again with the hardware/software integration.

Once again, their ability to do computation on device and optimize silicon to do it, is unparalleled.

A huge Achilles heel of current models like GPT-4 is that they can’t be run locally. And there are tons of use cases where we don’t necessarily want to share what we’re doing with OpenAI.

That’s why if Apple wasn’t so behind on the actual models (Siri is still a joke a decade later), they’d be in great shape hardware-wise.

letitgo12345 · 3 years ago
Imagine Walmart launching a ChatGPT-interfaced bot for shopping that customers take a liking to. Walmart starts acquiring both new customers and high-quality data they can use for RLHF for shopping. Eventually Walmart's data moat becomes so big that Amazon retail cannot catch up, and customers start leaving Amazon.

For AWS: if MS starts giving discounts for OAI model usage to regular Azure customers, that's going to be a strong incentive to switch.

For Apple: a Windows integrated with GPT tech may become a tough beast to beat.

anon7725 · 3 years ago
Can confirm. People are scrambling to remain relevant.
TechnicolorByte · 3 years ago
How does that manifest specifically?
VirusNewbie · 3 years ago
I work at a FAANG and our unreleased models are fantastic. Now, there might be panic about how to productize it all, but tech-wise I'm pretty surprised how good they are.
Robotbeat · 3 years ago
Not releasing the models may be the same as the models never existing in the end.
letitgo12345 · 3 years ago
Hope so. Don't want a single corporate entity (OAI/MS) dominating the entire economy. This sector desperately needs competition
tayo42 · 3 years ago
Sounds like the same thing that happened with datacenters? No one has ops or hardware sysadmins, and no one sets up large networks except a few people in those centralized cloud companies and a couple of other niche roles. The website-ops job changed.
carabiner · 3 years ago
Not just NLP, even 3D art: https://www.youtube.com/watch?v=SzGEfYh9ITQ

Top comment: I love seeing my job get transformed from 3D artist into prompt writer into jobless in a year or less, yay!

Washuu · 3 years ago
I do a lot of stylized 3D art: I still have time before AI figures that out!~
rlt · 3 years ago
> Also, there is no "working in AI", a few thousand people are doing real AI at most. The rest of us are calling an API.

I would call that “applied AI” and there’s no shame in figuring out novel ways to apply a new technology.

2-718-281-828 · 3 years ago
come on, it's not that bad, at least you're doing linear regression
sashank_1509 · 3 years ago
This thread and the blog post are all about ChatGPT, which is no doubt amazing and far ahead of the curve. However, I would also point to Meta's new vision model:

SAM (Segment Anything): it is so far beyond any other vision model that I actually believe vision will be solved in a few years now. People don't realize that there was an entire industry of publishing CVPR papers with incremental improvements on small datasets that has been completely invalidated by this paper. I've seen engineers on Cruise's segmentation team say Meta's new model seems to work better than the in-house models they developed, and that they should build on top of it.

I've worked on Tesla Autopilot before and saw it hit a mannequin because we never had mannequins in our dataset (we might have had them in the data, but they were not part of our ontology for the network to predict). One approach to mitigate this was OpenAI's CLIP, which used English-language classification labels, but Meta's SAM is so much better: it detects objects without needing to specify language. It just understands scenes and objects at a fundamental level; it can detect anything in a picture if you prompt it right.

Honestly it feels a lot like GPT-1, which was also ignored by most. If you prompt it right you can get it to segment anything in an image, but prompting it right requires human input. However, I can imagine the third or fourth version, with some RL sprinkled in, just working zero-shot on complete pixel understanding of any image in the world. This was one of the holy grails of computer vision, and we are seeing it solved right in front of our eyes.

mitthrowaway2 · 3 years ago
There seem to be a lot of breakthroughs being announced right now. My guess is that after ChatGPT, companies like Meta pushed their teams to get their own projects out the door which they'd already been working on for some time. SAM and Zero-1-to-3 (https://news.ycombinator.com/item?id=35242193) are incredibly impressive new projects in the image comprehension side. Then of course there's all the stuff happening with Midjourney/Stable Diffusion. And DeepMind's AlphaCode.
sashank_1509 · 3 years ago
I’d say after ChatGPT’s success every other company is focusing far more on scaling up than coming up with new methods. SAM is basically the method they came up with in 2021, but scaled up to 1 billion masks, which was not something Meta might have been willing to do before ChatGPT
ModernMech · 3 years ago
> I’ve worked in Tesla Autopilot before and saw it hit a mannequin because we never had mannequin in our dataset before ( we might have had it in the data but it was not a part of our ontology for the network to predict).

Incredible. To be clear, your group and the company you work for saw fit to release your beta-quality autopilot onto public roads, as it was hitting human-looking objects in your test labs. Is that what you're admitting here? Did anyone in your group object to this? Were you personally concerned?

Instead of moving fast and breaking things, what if you had not rushed your autopilot out, but waited on technology to improve to the point that your product wouldn't hit a human-looking object it had never seen before?

sashank_1509 · 3 years ago
There’s too much insanity over building new tech these days, and I find a lot of the hate directed at Autopilot stems from hate directed at Elon. Let’s go over the basics for Autopilot:

1. It costs thousands of dollars a year + $15k one time, and it’s very easy to get banned from Autopilot for life. They have a 3-strike rule + 1,000 miles driven on your Tesla with a good safety record before you’re allowed to access Autopilot.

2. I’ve seen two types of customers use Autopilot. One is rich dudes who buy it just to have all the bells and whistles. They use it less than once a month, and honestly it’s a waste of money for them. The others are passionate early adopters: they regularly make YouTube videos, constantly stress-test our tech, and are huge contributors to the tech itself. I’d say 10-20% of our users are the latter. The kind of group who don’t use Autopilot? The regular old Joe, who perhaps would like Autopilot for some practical use case. It’s too expensive and has a lot of restrictions (like having to grab the steering wheel once every few minutes or Autopilot disengages, and then you get banned from it even if you pay), so it doesn’t make sense for him to buy this tech anyway. In essence, this is not tech that is being used by regular people who have a chance of misusing it. Ever since the Uber self-driving crash, heads roll if a self-driving car crashes. As an engineer I don’t get any access to Tesla legal, but it’s my understanding that in none of the cases filed against Tesla did they prove Autopilot was active (forget Autopilot being the cause).

So yes, we’re not building new tech that’s killing hundreds; no one has any ethical dilemmas here. We’re building tech that a passionate group of users really want to see succeed and help us build, while the rest of the users just give us money for some reason, even though they don’t really use it or trust it. I would frankly be more torn about working at a place like Waymo, where the user has almost no control over the car (they don’t even sit at the steering wheel) and they have to solve the problem in one shot before releasing it to the public, while Tesla can keep iterating step by step (with its passionate user base supporting us and showing love all the way).

Edit: Changed this comment to make it smaller

AussieWog93 · 3 years ago
That sounds amazing.

I have a robot vacuum cleaner with "AI" computer vision, and you can just tell that it doesn't see the world in the way a living being does. It'll detect and avoid shoes, cables and dog turds, but is completely blind to anything not in its specific library of objects (we have a two year old, it chokes all the time on things like ribbons and toys).

Being able to recognise objects that cause it to choke, even if it doesn't know what they're called in English, is so much more significant to robotics than being able to label things.

nextworddev · 3 years ago
Yep. We came a long way from bounding boxes. Not sure the community here is ready for vision + 3D + video models to have their "ChatGPT moment" this year.
ragazzina · 3 years ago
I wonder if this means no more stupid select-all-squares-with captchas.
phito · 3 years ago
Captcha will soon be useless
greeny7373 · 3 years ago
Yep Sam is really really good.

Already have use cases for that too!

mach1ne · 3 years ago
What does it mean to solve vision though?
kajecounterhack · 3 years ago
It's 2023 and Tesla still can't reliably read traffic lights. Sounds like you know vision has a long way to go, but I'd expect in the future it'll be harder to make that case. Maybe that's all that's meant.
davesque · 3 years ago
I feel like the whole ChatGPT bubble has really highlighted what feel like some fundamental shortcomings in the worldview that is represented in tech. That is, there seems to be a winner take all dynamic baked into the tech world. Maybe this arises from the simple fact that tech places a lot of power in the hands of individuals. But there's an emergent downside to this which is that it makes those who were already much more powerful even more so. Because who is best suited to take advantage of all the power but those who were already perched up above everyone else and permitted to pick and choose their opportunities?

I've found it exceptionally hard to stay positive about all of this. It almost feels as though the advent of LLMs has shined light on a fundamental law of the universe that does not work out in the little person's favor. It's like survival of the fittest on steroids. Guys, what the heck are we doing??

sj8822 · 3 years ago
At some point, I became disenchanted with tech and working as a software engineer when I realized that tech doesn't ethically fit into our primitive economic organization right now. It's made me feel bad about myself and the career path I've chosen.

Basically, with tech you can build a product with a very limited number of people and resources, with a potentially infinite customer base. This means the product can suck up money from the whole world and redirect it into the hands of very few. At its essence, this is the case. With tech, this phenomenon is exacerbated to the extreme compared to other industries, where more physical resources and labor are required to scale production and distribution of the product, which means more wealth getting spread.

At its core, tech has the potential to exacerbate wealth inequality in mind-boggling proportions.

Our current economic system is not made for tech.

xyzzy4747 · 3 years ago
I think ultimately tech will result in humans being like the rest of the animals: completely unable to participate in the economy. This will only start to happen once AGI is reached, though, which might be only a decade away or less.
maxilevi · 3 years ago
But if people are paying for the product, then they value it more than the money, right?

Tech might bring more inequality, but what is the problem with that if it substantially increases everyone's standard of living?

rlt · 3 years ago
> you can build a product with a very limited amount of people and resources

If you can do it with limited resources, then so can anyone else.

(This is an oversimplification, of course)

JohnFen · 3 years ago
> there seems to be a winner take all dynamic baked into the tech world.

This has been a pathology in the computer/software industry for a very long time. It's never been actually true except in a couple of special cases, but the industry acts as if it is. That has led to all sorts of bizarre and undesirable things.

> Guys, what the heck are we doing??

I think we're playing with fire and, without extreme caution and careful consideration (which I'm not seeing much of), this could end very, very badly for both the industry and society.

I have always been optimistic about technology and society, but (perhaps like you), my optimism has largely evaporated over the last several weeks. I wish the future didn't look so dark. Perhaps, though, things will look less gloomy with time.

weatherlite · 3 years ago
What's your main concern with all of this?
hobo_in_library · 3 years ago
Pre-tech, the winner take all dynamic was prevented by physics:

- Customers were only willing to walk so far to go to a bakery

- A store could only stock so many varieties on its shelves

Now the cost of distributing goods across the globe and the incremental cost of creating new goods have shrunk to virtually zero for digital goods. Combined with people's general reluctance to endure the friction of switching providers, this creates a natural "winner take all" effect.

The places where you'll see this NOT happening are areas where those costs are still significant for some reason.

Examples:

- Different ride sharing services are popular in different countries, many of them local ones, since foreign companies had a harder time getting distribution started.

- Legal regulations also have been hindering the spread of payment companies across country-level borders

api · 3 years ago
> the friction of switching providers

Without this the gravity is much less powerful. Seems like where to focus if you want to empower people against the winner take all monopoly effect.

Areas where switching costs are low are areas where you have standard APIs and where your data resides locally, is easy to move, or are not data intensive at all.

shadowpho · 3 years ago
> That is, there seems to be a winner take all dynamic baked into the tech world.

In the tech world, the up-front cost of R&D/design/code is generally much larger than the ongoing cost of serving users. That means it's extremely expensive to have 1 user, but really cheap to have 1M users.

Thus winner-takes-all makes sense, as the company that is ahead can continue developing/designing at a cheaper per-user price than a company with no users.

WanderPanda · 3 years ago
This is literally threatening the winner (Google) taking it all forever. Some of these innovations are winner-take-all until they aren't anymore. The cycle seems to be 1 to 2 decades. The profits during that cycle are probably fair on average, given the risk necessary to make the tech ready for widespread usage.
UncleOxidant · 3 years ago
> This is literally threatening the winner (Google) taking it all forever

Sure, but look who is over there in the corner next to OpenAI: Microsoft with their $10B stake. The company that was the winner prior to Google being a winner.

strikelaserclaw · 3 years ago
Well it seems like the biggest winner over google is Microsoft?
sebmellen · 3 years ago
This is nowhere near new: https://en.wikipedia.org/wiki/Matthew_effect

"To those who have everything, more will be given, and from those who have nothing, everything will be taken."

RHSman2 · 3 years ago
Human nature, not just tech
davesque · 3 years ago
Is it though? I remember feeling much more positive about society roughly 20 years ago.
newswasboring · 3 years ago
The AI world can be properly dystopian these days. I know someone who accepted a job offer at one of the biggest market research firms in the world. She was making AI models for a high-tech company before and was hired to build summarization AIs. Between serving out the (frankly ridiculous) two-month notice period and joining the new job, GPT-4 was launched and prices came down for 3.5. The original plan of building something internally turned into "feed everything to ChatGPT and then ask it questions." A junior engineer whipped up this system in a couple of weeks. It's all just API calls, after all. Now she spends entire days trying to ask the model the right questions so that it generates the correct reports. Her entire job has been boiled down to talking to an AI. She is working with the most cutting-edge technology, yet it's so mundane that everyone on the project is just bored. This could have been an ironically tragic character in a Gibson novel.
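For what it's worth, the "feed everything to ChatGPT, then ask it questions" system described here really can be a few dozen lines. A minimal sketch of the prompt-assembly half (the helper name and the example question are hypothetical; the messages structure is the standard OpenAI chat-completions shape):

```python
def build_report_messages(documents, question):
    """Pack the source documents into a system prompt, then ask one question."""
    context = "\n\n---\n\n".join(documents)
    return [
        {"role": "system",
         "content": ("You summarize market research documents.\n\n"
                     "Source material:\n" + context)},
        {"role": "user", "content": question},
    ]

# The resulting messages would then be sent to a hosted model, e.g.
# (not run here): openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                              messages=msgs)
msgs = build_report_messages(
    ["Q1 survey results ...", "Competitor pricing notes ..."],
    "Summarize the key findings as a bulleted report.",
)
```

The hard part of the job then becomes exactly what's described above: iterating on the question string until the reports come out right.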
zug_zug · 3 years ago
This is an outstanding phenomenon, not dystopian. Your friend was going to have to do the incredibly awful and wasteful task of rebuilding an incredibly complicated system.

However, fortunately, a much more optimal solution happened: somebody built a superior and cheap solution (unfortunately not yet open) and let the whole world leverage it. Now your friend wastes very little energy duplicating that effort, and presumably has free time and energy to think about other problems that aren't solved.

newswasboring · 3 years ago
Indeed, a much more optimal way to do things has been found. But that's not what this is about. It's about the human experience. AI is creating value, but the day to day life for a problem solver has become a bit more mundane.
ilt · 3 years ago
Could be ideal for next season of Bored to Death.
woeirua · 3 years ago
I think what's really depressing here is just how effective scaling seems to be. It means that any company that isn't willing to pour hundreds of millions of dollars into its AI programs isn't serious at all, and would probably be better off hiring engineers to figure out how to integrate GPTX into its systems than trying to roll its own. I really think we're going to see a massive collapse of AI/data science jobs once it becomes clear that no in-house model is ever going to be better than the zero-shot performance of these mega models.
Buttons840 · 3 years ago
My understanding is that transformers are now favored over RNNs because they parallelize better.

It's hard to imagine, but I wonder if there are some non-parallelizable machine learning algorithms that might outperform these massive models? It seems improbable, but it's a small hope I've had. The greatest intellects we're aware of (our own) do not scale very well, and maybe the same will ultimately apply to AI?
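The parallelization difference the parent mentions can be sketched in a few lines of numpy (toy sizes, no learned attention projections — just the structural point):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                       # sequence length, hidden size
x = rng.normal(size=(T, d))       # toy input sequence

# RNN: each step depends on the previous hidden state, so the T steps
# must run one after another -- no parallelism across time.
W, U = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = np.zeros(d)
for t in range(T):
    h = np.tanh(x[t] @ W + h @ U)

# Self-attention: every position attends to every other position via
# one batched matrix product, so all T outputs are computed at once.
scores = x @ x.T / np.sqrt(d)                                  # (T, T)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attended = weights @ x            # (T, d), no sequential dependency
```

That loop-free structure is why transformers map so well onto GPUs, which is a hardware argument rather than an expressiveness one — it leaves open exactly the question asked here.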

machiaweliczny · 3 years ago
I remember seeing a theoretical analysis that compared the computational power of transformers, LSTMs, and RNNs, and I think RNNs came out theoretically stronger (they can learn more complex functions). Can't find it now.
rlt · 3 years ago
I wonder if there’s an incentive for a large group of companies to fund open source models, sort of like Linux.
chopete3 · 3 years ago
At least, people working on ML models that handle these tasks must be feeling terrible. They know their models will be abandoned sooner or later and recomposed on top of an LLM:

1. Classification

2. Named Entity Recognition (NER)

3. Dialog Engine

4. Sentiment Analysis

5. Tone Analysis

6. Language Translation

7. Summarization

8. Tokenization

9. Simple NLP Tasks (part-of-speech tagging, dependency parsing, lemmatization, morphological analysis)

10. Sentence Segmentation

11. Content Parsing

12. Question Answering (Structured & Unstructured)

13. Similarity

14. Grammar Correction

15. Speech to Text (ASR)

16. Text to Speech (TTS)

17. OCR

18. Image Recognition

19. Text Test Data Generation
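To make the point concrete: for many of the tasks above, the dedicated model is replaced by nothing more than a one-line instruction. A hedged sketch (the template wordings are invented for illustration, not a recommended prompt set):

```python
# Hypothetical zero-shot prompt templates: each classic NLP pipeline
# collapses into a single instruction sent to a general-purpose LLM.
PROMPTS = {
    "classification": "Classify the topic of this text: {text}",
    "ner":            "List the named entities (people, places, orgs) in: {text}",
    "sentiment":      "Is the sentiment positive, negative, or neutral? {text}",
    "translation":    "Translate to French: {text}",
    "summarization":  "Summarize in one sentence: {text}",
}

def prompt_for(task, text):
    """Render the zero-shot prompt that stands in for a dedicated model."""
    return PROMPTS[task].format(text=text)
```

Whether the LLM matches a tuned task-specific model on accuracy is a separate question, but the engineering cost drops from "train and serve a model" to "write a string."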

lysecret · 3 years ago
Couldn't agree more. I see a lot of parallels to the move to the cloud. Yes, some big comps will still have an AI engineering team and build their own models, but the vast majority will move to an API. The same way the vast majority used to have a sysadmin type who set up and maintained a local server and then moved to the cloud. Of course, the superstar AI engineers will now find much more interesting work at the big comps, just like there are super interesting challenges in building the cloud for everyone.
espe · 3 years ago
actually that might not be the case. don't underestimate the value of older, better understood and much smaller models. also, why not call bert-style (encoder) models LLMs as well. i would expect last-gen models to give us an edge in controlling the effects of the latest ones (cf. the alignment discussion).
chopete3 · 3 years ago
BERT models are also LLMs. I was referring to LLMs more as API-based access, hosted by Microsoft, Google, or AWS for large-scale isolated/production consumption, like RDS (MySQL, Postgres).

There will always be custom models, with controlled training data and specific use cases.

lacker · 3 years ago
I think this depends a lot on where you are working.

I've talked to academics who are getting discouraged that they don't see how their approach to AI is going to be possible any more, with so much funding going toward the largest models from industry. On the other hand I've talked to startup founders building AI products whose business is booming because ChatGPT brought so much attention to the entire space.

We are certainly living in interesting times ;-)

karmakurtisaani · 3 years ago
A PhD student I recently talked to was complaining how in academia they will never have the resources it takes to train models like GPT. And how now some academics are only researching the input/outputs to these LLMs. Seems like it's pretty dire or at least uninspiring to all but very few at the moment.
tayo42 · 3 years ago
why is that? don't academic particle researchers get tons of money to build wild experiments to measure waves or collide particles? unless i have the funding part wrong? are those not academic projects?
ChatGTP · 3 years ago
> I've talked to academics who are getting discouraged that they don't see how their approach to AI is going to be possible any more

Welcome to how the entire world probably feels, but way worse, because they haven't been getting 6-7 figure salaries for the last ten years to insulate themselves from what big tech wants to see happen.

I think it's a positive thing that AI researchers are feeling this way because it might be the only thing that slows things down or at least shows us a little more empathy.

I'm not counting on it, but yeah, all I can say is: get used to it. I'm basically just leaning on trying to enjoy the things in front of me, kind of like being more mindful while we're in free fall, while simultaneously fretting and praying for future generations and hoping they have a future.