typest · 3 years ago
The other day, I was trying to build a web scraper, but I kept getting an error that the data wasn't utf-8 encoded. I gave chatgpt the error along with the first 10 bytes (e.g. b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03') and it informed me that this was a gzip-encoded binary file, and then it wrote me working code to decompress the file.
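For anyone curious, the fix is a short one: gzip streams always start with the magic bytes `0x1f 0x8b`, so you can check for them and decompress before decoding. A minimal sketch (the helper name is my own, not the code ChatGPT produced):

```python
import gzip

def decode_body(raw: bytes) -> str:
    # gzip streams begin with the magic bytes 0x1f 0x8b
    if raw[:2] == b'\x1f\x8b':
        raw = gzip.decompress(raw)
    return raw.decode('utf-8')

# Round-trip: compress a sample payload, then decode it back
payload = gzip.compress('hello'.encode('utf-8'))
print(decode_body(payload))  # hello
```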

This is extremely valuable and saved me hours/was the difference between me stopping or not.

There will surely be people that overhype this, but there is real value as well.

thefilmore · 3 years ago
The first result of searching for your string on Google is a StackOverflow link [1] that mentions that this is a gzip header. There is another link [2] below it showing how to decompress it.

[1] https://stackoverflow.com/questions/58552645/what-exactly-is...

[2] https://stackoverflow.com/questions/57328953/trying-to-decod...

timr · 3 years ago
This strikes me as comically similar to patterns I see in tech: people not knowing what came before, and fixating on overly complex new solutions to problems that were trivially solved with prior tools.

I fully expect a node.js server in a docker container to query GPT for error messages found in logs. They'll give it a cutesy name like ChattyDev, and before you know it, every coding school will be pumping out coders who think it's an essential development tool, and HN threads filled with people who will defend the practice to the death, referring to examples such as left-pad for prior art.

"yeah, well...people made fun of left-pad, but we see how it has completely automated the padding of strings, removing one more chance for error!"

noobermin · 3 years ago
I hate to say it, but this is exhibit A for "AI will just make some developers even lazier than SO did"
rom-antics · 3 years ago
To be fair, you could google "1f 8b" or even that entire hex string and Google will tell you the same thing.
kilgnad · 3 years ago
Wow. Pretty amazing that chatGPT knows how to use Google.
riku_iki · 3 years ago
The difference is that with Google you can trace the source of the statement, while with ChatGPT the only option is to believe it.
elil17 · 3 years ago
Bubbles can be based in real value.

The dotcom bubble was a bubble because there were a lot of junk, overvalued companies. Doesn't mean websites aren't useful.

kilgnad · 3 years ago
I know what you mean but the word "bubble" implies no value. A bubble is mostly empty space of no value, the surface of the bubble hides the emptiness.

When the bubble pops, as it inevitably does, the emptiness is exposed.

The meaning you're going for here is that although there's a "bubble" it's likely a smaller bubble, not a large bubble like the dotcom bubble or the housing bubble.

lm28469 · 3 years ago
It's nice and all, but simply putting "b'\x1f\x8b\x08\x00\x00\x00\x00\" into Google search gives you the same answer (it's even the first link)
testcase_delta · 3 years ago
I started out reading the article thinking the author was going to say that it's all vaporware, but he doesn't at all.
neom · 3 years ago
Tom Scotts video today re where we are on the AI's impact to society curve was interesting : https://www.youtube.com/watch?v=jPhJbKBuNnA&ab_channel=TomSc...

I think he's probably right that we're very early on the adoption curve.

(even though I always get downvoted and called a dumbass when I mention it here, I still think progress in quantum computing (predictive) and AI go hand in hand towards AGI.)

ksec · 3 years ago
It is what Nvidia's boss Jensen Huang calls [1] The iPhone moment of AI.

Basically every AI (so to speak) before ChatGPT is like Smartphone 1.0: Blackberry, Sony Ericsson, Nokia Symbian, etc. ChatGPT is Smartphone 2.0, aka the iPhone era.

It took the iPhone many years to reach mass adoption. MKBHD, as tech savvy as he is, only got his first smartphone in the iPhone 4S era (if I remember correctly). And then phablets (any screen larger than 5") came. I think we are at the end of that curve, 15 years after the iPhone's introduction.

I think ChatGPT will follow a similar path. And if it was iPhone that pushed TSMC to sustain our current semiconductor improvement for the past ~10 years, then it will be ChatGPT that pushes us towards 1A ( 1nm ) node for another decade.

[1] https://wccftech.com/nvidia-ceo-calls-chatgpt-as-one-of-the-...

ryloric · 3 years ago
Unrelated, I've never seen this dude without a leather jacket on.
adamsmith143 · 3 years ago
>(even though I always get downvoted and called a dumbass when I mention it here, I still think progress in quantum computing (predictive) and AI go hand in hand towards AGI.)

You get downvoted because it's not a coherent position. It's like saying I'm really bullish on renewable energy powered by web3 blockchains. It's just a mishmash of buzzwords. The reality is Quantum Computing has limited use cases in general computing and AI isn't really one of them. I don't think there's even an argument that e.g. a quantum chip offers any real advantage over a TPU or even GPU for AI tasks.

kypro · 3 years ago
I've heard some people cite quantum physics to explain free will and consciousness, although these arguments always seemed completely unsupported and a form of "god of the gaps" reasoning to me.

But being charitable I suppose if you subscribe to the idea that creativity and consciousness arise from quantum behaviours then perhaps it makes sense. It would suggest we likely have the architecture of an AGI completely wrong currently though, so I'm not sure how this really relates to AIs like ChatGPT.

noobermin · 3 years ago
The downvotes on QC are likely somewhat understandable. Unlike chatgpt there are no results that give hope that QC will look like "computing" any time soon unless there is a breakthrough on the horizon.
luplex · 3 years ago
AGI doesn't need quantum computing, but I'm sure it will be useful when we finally figure it out.

Maybe we need AI to figure out quantum computing.

monkeynotes · 3 years ago
Here's my killer app theory for stuff like GPT, just one example...

MS Teams gets the GPT treatment. It watches all your chats, email, calendars, code, wiki, meetings, spreadsheets, documents. Where is Slack now?

If you want to know absolutely anything, from engineering domain knowledge to product strategy through to help with sales, Slack has no answer. Ask Teams to splat you a database table, refine an SQL statement, brief you on a meeting, or remind you of who a particular customer is and what sort of sales pitch would appeal to them.

I mean, not seeing the potential in GPT is really being intentionally blind to world-changing technology. Just the fact that it can scan your whole codebase, find potential security holes, suggest performance blind-spots, and indeed write code, or at least suggest code. This alone is such a big change it's hard to get your head around the opportunity. All of that in a chat window or IDE. It's revolutionary.

kypro · 3 years ago
I was saying the same thing to a colleague a couple of weeks back. The power of ChatGPT will become obvious when it's sucking in all of an orgs data.

At that point instead of your boss asking you to send an email to someone or asking the data team to pull some stat, they'll just ask the chatbot to do it for them.

My guess is 80%+ of the work most people in corporate jobs do could fairly easily be automated with the next generation of GPT being fully integrated with an organisations data and tools.

The power of this technology is so obvious. Those saying ChatGPT is still making a few programming errors with their crappy prompts are missing the point. Wait until a slightly more advanced version of GPT has access to your dev documentation + all your repos + your jira ticket board + your dev environment.

You won't even need to ask it to do anything. Your boss is going to quickly wonder why they need a team of 20 devs when a team of 2 devs reviewing ChatGPT pull requests is 10x more efficient.

mlsu · 3 years ago
If you thought tech debt was bad with human programmers...
somewhereoutth · 3 years ago
> Wait until a slightly more advanced version

Yes, and therein lies the issue - that last crucial 1%, 0.1% may be impossible to achieve (sort of like attaining lightspeed travel)

riku_iki · 3 years ago
> Wait until a slightly more advanced version of GPT has access to your dev documentation + all your repos + your jira ticket board + your dev environment.

Why are you so confident that ChatGPT will ever be able to work as an independent dev and won't hit the limit of its abilities?

JieJie · 3 years ago
Or all twenty devs stay and eighty more get hired because DevGPT can do 100x more work in the same amount of time.
RhysU · 3 years ago
> The power of ChatGPT will become obvious when it's sucking in all of an orgs data.

I am having a hard time seeing this.

Intranet/corporate search has forever been awful in comparison to internet search.

rvz · 3 years ago
> I mean, not seeing the potential in GPT is really being intentionally blind to world changing technology. Just the fact that it can scan your whole codebase, find potential security holes, suggest performance blind-spots. All of that in a chat window or IDE. It's revolutionary.

Except that we are just left with outputs that are untrustworthy. All of these GPT products (ChatGPT, Copilot, Bard, Bing AI) still frequently hallucinate answers and often produce very incorrect solutions. We have already seen this with Copilot writing vulnerable code.

What this current AI hype cycle fails to realize is that as long as you cannot trust the generated output, it cannot be used safely in serious, highly regulated industries such as finance, law and medicine, all of which require trust and have been subject to attempted AI disruption for years with the same problem of trust left unsolved. It is not enough even to disrupt search engines.

There is nothing new or revolutionary about an AI SaaS business with an API and a chatbot generating nonsense. I expect the hype around AI LLM chatbots to subside just like the hype around social spaces apps like Clubhouse did.

unraveller · 3 years ago
An anonymous Clubhouse is back on the cards with realtime a.i. voice synth.

Trust in information is for people who outsource their every opinion; all they want to know is whether it will keep them in high esteem for re-stating it and blindly following it. Well, for law, health, and finance, that depends on what year you got your opinion, since the best information changes. Which is what we want if we want better.

A.I. output is just words on a screen; it only promises coherence. How well a technology assists you is up to you, or else we'd call it a torture device.

monkeynotes · 3 years ago
> outputs that are untrustworthy

Seems like this is an interesting engineering or product problem worth solving. History is littered with big problems that were solved. Look at flight: within 50 years we went from "it's not possible", to "it's too fragile", to jets, to international airports and mass transit never before possible.

If you work in tech I think you've slept through your life.

RandomLensman · 3 years ago
The real application is increasing bureaucracy massively. Nothing that doesn't get an opinion filed for it or some "memo" attached to it. No request for comment ever unanswered, nothing that doesn't deserve some written specs or a risk assessment. No process not documented in lengthy prose, etc.
monkeynotes · 3 years ago
Pretty cynical opinion.
swatcoder · 3 years ago
That won't work until somebody designs a reliable security model for LLMs.

In the real world, you can’t have a model that slurps up every bit of information in a company and then just lets anybody ask open ended questions about it.

But the security solutions for these technologies are far from maturity. They're almost certainly addressable, but it's going to be a whole industry in itself, will take years to take shape, and will probably involve underlying architectures designed very differently from what we see in this generation of models.

monkeynotes · 3 years ago
So you agree it's possible, desirable, and game changing? I think those are hugely positive indicators this will happen and people sitting comfortably in the status quo are going to lose out.

These corporate AI instances will take huge configuration work; like when IBM started adding infrastructure to large corps, it will be a huge effort. But the reward and advantage it provided was worth the millions of dollars and years of work. Eventually it all permeated down to consumers.

How long this will take is up for debate, but I don't think we can easily dismiss that it's inevitable.

poisonborz · 3 years ago
Imagine the prompt hacking possibilities, this time not to generate outputs but to reveal private discourse. And imagine that when it hallucinates, you're left with wrong impressions of your colleagues or the state of projects.
nrjames · 3 years ago
Incidentally, it's amazing how terrible MS Teams is at code blocks, both at creating them and at copying/pasting the code from one somebody else created.
robotburrito · 3 years ago
Basically everyone I know is on one extreme or the other. Either they say "It just makes stuff up and it's totally useless!", somehow not even admitting it's very cool, or they act like it's going to lead to all of us being homeless while the 1% of the population that owns the AI has even more wealth.
monkeynotes · 3 years ago
The reality, as always, is somewhere in the middle
anonyfox · 3 years ago
sounds like a possible way to detect who will be on which end of the spectrum /s
crazysim · 3 years ago
You should see https://www.microsoft.com/en-us/microsoft-teams/premium .

It may not be what you describe but they are definitely flirting.

rco8786 · 3 years ago
yes, i've been trying to drive this home to folks around me also.

AI is only as good as the data it's learned from. There is little to no value in a really super awesome prompt. "Prompt engineering" is not a real thing. There is, however, enormous value in an AI that has learned on some specific set of data that nobody else has access to.

IOW, data is still the currency.

rvz · 3 years ago
> Just as we saw companies adding the suffix “dot com” to their names in the 90’s and announcing “blockchain” initiatives in 2017, so too will we now see an endless parade of AI announcements in 2023.

Correct. Everyone and their cats are now an AI company again, hyping and parading that a hallucinating chatbot is going to change the world, take over search engines and kill Google. It won't. It needs to do more than that to even challenge Google.

We are already looking at its limitations: having tried Bing AI, ChatGPT and Google Bard, they all fall short on reliability, and it is all fundamentally rooted in the black-box nature of neural networks.

The hype and mania will last just as long as Clubhouse's did in 2020, and end like GPT-3's did after that AI hype cycle died.

beoberha · 3 years ago
At least with the crypto bubble, VCs could easily drop their bags on the general public. Things are going to be interesting when all these GPT wrappers being billed as “AI companies” start to centralize (how many AI copy writers do we really need?). It’s going to be a race to the bottom and the real winners here are going to be GPU sellers.
smoldesu · 3 years ago
To that end, this weekend I wrote a GPT-Neo-1.3b based Discord bot, and ran it on an Oracle "Always Free" instance. It costs me nothing, and I'm able to play with a surprisingly large/competent AI model.

As the technology progresses and models are optimized and shrunk, I'm not sure these "AI companies" will ever stand a chance. Even cheap Android smartphones can run the smallest GPT-Neo model; eventually the need for SaaS wrappers around the technology will be cannibalized.

WalterSear · 3 years ago
You are assuming that copywriting, or even text generation, is all that GPT-like AI is good for. These tasks just scratch the surface of the technology's >existing< abilities. There are transformers already capable of operating pretty much any desktop GUI.
beoberha · 3 years ago
No I just didn’t mention it. Mostly because I see companies like Google and Microsoft rolling anything remotely useful for businesses into their offerings.
rom-antics · 3 years ago
Bubble of... 2023? This has been going on for a while. For at least 5 years, founders have known that the way to get funding is to claim your product has "machine learning" or "artificial intelligence"
kilgnad · 3 years ago
chatGPT is more than a bubble. It's a societal inflection point. The reaction to it is appropriate.

However as usual people will get too excited and the hype will outpace the actuality of the technology. There will be a slight "bubble" but this isn't anything like the "housing bubble".

Prior to chatGPT though, AI could be characterized as something along the lines of a housing bubble. I would say almost all lines of research in AI save LLMs are overhyped bubbles.

Not saying these lines of research are useless or inconsequential. Far from it. AI outside of LLMs is amazing. But these AIs are definitely inside huge bubbles.

noobermin · 3 years ago
That's good because the OP didn't compare it to the housing bubble at all. He just said "bubble."
gowld · 3 years ago
The AI Bubble of 2023 is the Generative AI Bubble of 2023
Tepix · 3 years ago
But the pure AI companies like OpenAI and Tenstorrent haven't gone public yet.
somewhereoutth · 3 years ago
I believe that the crucial test of real AI will be if a group of such AIs invent their own language to allow them to cooperate - much like intelligent animal species such as ourselves. The richness of the language indicates the depth of intelligence.

Any such language would necessarily be limited to their 'domain of existence' - you can't invent words for colours if your world has no light. Thus we'd need to give the AIs a full domain of existence for a full AI. They would need eyes (and ears?) and locomotion so they have a real world to reason about and talk about to each other (and us?)

The point being that emergent language is the only general way to gauge intelligence (above and beyond the somewhat anachronistic Turing test). I also conjecture that human language (in fact any human language) is complete in the sense that any and everything can be described within it (e.g. you could explain General Relativity to any human from any time, using their own language, as the basic concepts are already present, you just build on them - whereas this might not be possible with bird calls).

Furthermore, if our language is indeed complete, then we could suggest that any intelligent alien species we might encounter will also have at best a similarly complete language - and thus cannot be meaningfully more intelligent than us, as there is no 'higher' language, no concepts inaccessible to us.

paulpauper · 3 years ago
Where was this guy warning of a crypto bubble in 2020-2021? Instead he was hyping crypto. If this is any indication, AI will probably continue to do well. I don't think GPT will be as big as Google, but it's not going to pop the way he's expecting either.