Readit News
qlk1123 commented on I'm Too Old   amazingcto.com/im-too-old... · Posted by u/KingOfCoders
KingOfCoders · a year ago
(author here)

As I wrote

"AI is amazing. I’ve trained an AI to detect model railroad locomotives and their types. I’m a daily user of AI to let it write code for me."

But AI will take away all coding. Sure, some people will still write code, like some people today use a mechanical typewriter, or like some artists use clay. But most of what happens in computers in the future will not be code we write but executed, self-modifying, self-optimizing AI models.

"more and more time to think of what I want build"

This is not how AI will work. It's not that I want something built and the AI builds it; the AI will just do things. There will no longer be things to "build". We will no longer think of code as something that exists, but as something that just happens.

You will say, "But I could tell the AI what film to create, with some scenes and a rough story, and it will create that film" - but what if AI creates much better, more powerful, more exciting films than any you could imagine? Films no human ever thought of?

Again, like the typewriter, some people will tell the AI fragments of a story to create a film. But most media content will be created by AI, for consumption, on the initiative of AIs, not on the initiative of humans.

In the mid-90s I wrote a philosophy paper at university about an AI that generates random images (triggered by my first digital camera, an Olympus C-800L) and then interprets them (with some estimates of the speed at which it could do both generation and interpretation). That AI has basically seen it all: an alien killing JFK, me on the moon, you and me drinking a beer, and things we could never imagine.

[Edit] Like the people writing new games for 8-bit computers today. They exist, but it's a niche.

qlk1123 · a year ago
IF (a big if!) all your forecasts come true, I don't think what people will mourn most at that point is losing the fun part of coding/developing software.

Like the laborers who went jobless in the waves of the industrial revolution, they should have started planning earlier for other jobs and skills, rather than focusing solely on the fulfillment of making goods.

qlk1123 commented on If you had no concern about market fit and funding, what would you work on?    · Posted by u/goksankobe
tikkun · a year ago
> it should significantly enhance the human civilization in one or multiple domains (so things like rendering Mandelbrot set infinitely faster does not cut it IMO

I feel compelled to note that "greatness cannot be planned" [1], and often things that significantly enhance human civilization didn't seem like they would at first.

[1]: https://www.youtube.com/watch?v=dKazBM3b74I

qlk1123 · a year ago
Thanks for sharing.

This presentation is pretty inspiring to me, but at the same time there is just no obvious way to act on the claim. How could any management allow subordinates to do things without any objectives to justify them?

Generally I buy the reasoning, but maybe that's just because I cannot identify any fundamental flaws right now.

qlk1123 commented on Ask HN: What Are You Learning?    · Posted by u/velyan
qlk1123 · a year ago
(Edited to fix the formatting)

Surprised that nobody has mentioned reinforcement learning here.

I bought three books (in their traditional Chinese editions), whose original titles are:

* Reinforcement Learning: An Introduction, 2nd ed., Richard S. Sutton & Andrew G. Barto

* Deep Reinforcement Learning in Action, Alexander Zai & Brandon Brown

* AlphaZero 深層学習・強化学習・探索 人工知能プログラミング実践入門 (AlphaZero: deep learning, reinforcement learning, and search; a hands-on introduction to AI programming), Hidekazu Furukawa (布留川英一)

None of them teaches you how to apply RL libraries. The first is a textbook and says nothing about frameworks at all. The last two are more practice-oriented, but their examples are too trivial compared to a full board game, even one whose rule set is simple for humans.

Since my goal is eventually to conquer a board game with an RL agent trained at home (hopefully), I would say the third book is the most helpful one.

But my progress has been stuck for a while now, because apparently all I can do is keep trying hyperparameters and network architectures to find the best ones for the game. So I kind of "went back" to supervised learning practice: I generated a lot of random play records and then let the NN model at least learn some patterns from them. Still trying...
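
For what it's worth, a minimal sketch of that supervised warm-start in PyTorch. The board size, the network, and the random_playout generator are all placeholders of mine, not from any of the books:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
BOARD_CELLS, N_MOVES = 81, 81  # placeholder sizes for a small board game

# A tiny policy network: flattened board in, move logits out.
policy = nn.Sequential(
    nn.Linear(BOARD_CELLS, 128), nn.ReLU(),
    nn.Linear(128, N_MOVES),
)

def random_playout(n_positions):
    # Stand-in for "generate lots of random play records":
    # (board, move) pairs sampled from random games.
    boards = torch.randint(-1, 2, (n_positions, BOARD_CELLS)).float()
    moves = torch.randint(0, N_MOVES, (n_positions,))
    return boards, moves

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for step in range(100):
    boards, moves = random_playout(256)
    loss = F.cross_entropy(policy(boards), moves)  # imitate the recorded moves
    opt.zero_grad(); loss.backward(); opt.step()
```

The only point is the shape of the loop: plain supervised learning on (position, move) pairs, so the network picks up some pattern sense before self-play RL takes over.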

qlk1123 commented on LLMs use a surprisingly simple mechanism to retrieve some stored knowledge   news.mit.edu/2024/large-l... · Posted by u/CharlesW
wongarsu · a year ago
The opposite is also exciting: build a loss function that punishes models for storing knowledge. One of the issues with current models is that they seem to favor lookup over reasoning. If we can punish models (during training) for remembering, that might cause them to become better at inference and logic instead.
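
One naive way to make that concrete, purely as an illustration (the "fact probe" set and the penalty form here are assumptions, not anything from the article):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim, vocab = 32, 100
model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, vocab))

def combined_loss(x_task, y_task, x_probe, y_probe, lam=0.1):
    # Ordinary task loss: the model should still predict well.
    task_loss = F.cross_entropy(model(x_task), y_task)
    # Memorization penalty: the log-likelihood the model assigns to
    # pure-lookup "fact probes". High recall confidence raises the loss.
    probe_loglik = -F.cross_entropy(model(x_probe), y_probe)
    return task_loss + lam * probe_loglik

x_task, y_task = torch.randn(8, dim), torch.randint(0, vocab, (8,))
x_probe, y_probe = torch.randn(4, dim), torch.randint(0, vocab, (4,))
combined_loss(x_task, y_task, x_probe, y_probe).backward()
```
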
qlk1123 · a year ago
I believe it will add some spice to the model, but you shouldn't go too far in that direction. Any social system has a rule set, which has to be learnt and remembered, not inferred.

Two examples. (1) Grammar in natural languages: you can see how another commenter here uses "a local maxima", and how people react to that. I didn't even notice, because English grammar has never been native to me. (2) Prepositions: between two languages, no matter how close they are, prepositions mostly don't map directly. The learner just has to remember them.

qlk1123 commented on The baffling intelligence of a single cell: The story of E. coli chemotaxis   jsomers.net/e-coli-chemot... · Posted by u/jsomers
patcon · a year ago
> But I don't think of myself as a CAS or talk about We. Wecellfs?

I am a collector of theories of consciousness :) Assuming your quote above refers to the "scale" at which "self" is understood, you might be interested in this theory:

Information Closure Theory of Consciousness (2020) https://www.researchgate.net/publication/342956066_Informati...

This reddit comment sums it up better than the paper seems to be able to: https://www.reddit.com/r/MachineLearning/comments/dco3t1/com...

> Consciousness (at least, the consciousness(es) that we are familiar with) seems to occur at a certain scale. Conscious states don't seem to significantly covary with the noisy stochastic activities of individual cells and such; rather they seem to covary with macro-level patterns and activities emerging from populations of neurons. We are not aware of how we precisely process information (like segmenting images, detecting faces, recognizing speech) or perform actions (like precise motor control and everything). We are aware of things at a much higher scale. However, consciousness doesn't seem to exist at an overly macro-level scale either (for example, we won't think that the USA is conscious).

qlk1123 · a year ago
Thanks for sharing the interesting summary.

However, I would like to mention that sometimes we do think so, as in "the will of the party", at least in some languages' contexts.

Fun fact: when I tried to find similar phrases like "the will of the Democratic/Republican Party", Google returned five results for the former, but each was followed by "voters" or "members" and thus not what I wanted; for the latter there were no results at all. But when I searched for "the will of the party", I found an abstract of a paper from my area.

Maybe a party is too small for this. It seems "the will of the nation" is widely used.

qlk1123 commented on Devin: AI Software Engineer   cognition-labs.com/blog... · Posted by u/neural_thing
asteroidz · a year ago
> 0.2 or less

I find that questionable. What does "software engineering in a business environment" require that a competent competitive programmer couldn't also learn?

qlk1123 · a year ago
Specification. For any real business, it takes huge effort for a group of people across many domains to consolidate what should be done. And that's only the "what" part.

I'm not saying competitive programming contests are easy or anything, just pointing out that in a contest with a time constraint, there is no room for the requirements-gathering phase.

Another analogy: martial arts vs. the military.

qlk1123 commented on Ask HN: Do you get lazy during burnout?    · Posted by u/bosch_mind
giantg2 · 2 years ago
Swapping teams can sometimes help for a little bit. But within a couple months it's back to the same burnout for me.
qlk1123 · 2 years ago
This is the same for me. My case was an internal transfer from an engineering division with a solid background to a newly created division in a different domain. Things turned out to be very different from what I had previously imagined.

At least now I know that, at the end of the day, burnout can only be fixed by other means.

qlk1123 commented on I recorded a screen capture of a task. Gemini generated code to replicate it   twitter.com/DynamicWebPai... · Posted by u/Michelangelo11
tomatohs · 2 years ago
We're building an AI Agent that can perform manual testing on feature branches [1]. I can tell you, it works, and it's going to get better, and it's going to happen fast. It's not hard at all for an AI to read text on the screen and click it.
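
As a minimal sketch of that mechanic, with off-the-shelf OCR plus synthetic input (pytesseract and pyautogui are my stand-ins here, not necessarily what testdriver.ai uses):

```python
import pyautogui
import pytesseract
from pytesseract import Output

def click_text(target: str) -> bool:
    """Screenshot the display, OCR it, and click the first word equal to target."""
    img = pyautogui.screenshot()
    data = pytesseract.image_to_data(img, output_type=Output.DICT)
    for i, word in enumerate(data["text"]):
        if word.strip() == target:
            # Click the center of the word's bounding box.
            x = data["left"][i] + data["width"][i] // 2
            y = data["top"][i] + data["height"][i] // 2
            pyautogui.click(x, y)
            return True
    return False

click_text("Submit")  # press the Submit button, wherever it happens to render
```

The hard part is deciding which text to click and judging the result; the reading-and-clicking itself is commodity tooling.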

What's amazing is the social impact this has - often people don't believe it's real. It feels like when I had to explain to my parents that, in my online multiplayer game, the other characters were other kids at home on their own computers.

I think it's a matter of denial. Yes, software is made for humans and we will always need to validate that humans can use that software. But should a human really be required to manually test every PR on a 10k-person team?

Again, as a founder of an AI Agent for E2E testing, we work with this every day. If I were a QA professional right now, I would watch this space closely over the next 6 months. The other option is to specialize in the emotional human part, like in gaming. You can't test for "fun."

1. https://testdriver.ai. Demo: https://www.youtube.com/watch?v=HZQxgQ1jt4g

qlk1123 · 2 years ago
> You can't test for "fun."

Sounds intuitive, but there is game research working on exactly that. Two related terms (learned at the IEEE Conference on Games) come to mind:

1. Game refinement theory. The inventors of this theory see games as if they were evolving species, so the theory describes how games became more interesting, more challenging, more "refined". Personally I don't buy it, because the series of papers covered only a limited number of examples and it is questionable how the related statistics were generated (especially the repeatedly recurring baselines of Go and Mahjong), but nonetheless there is a theory for it.

2. Deep Player Behavior Modeling (DPBM): This is the more interesting one. Game developers want their games to be automatically testable, but the agents are often not ready, or not true enough to human play. Take AlphaZero for Go or AlphaStar for StarCraft II: they are impressive, but superhuman, so the agent's behavior gives us little insight into the quality of the game or how to further improve it. With DPBM, the signature of real human play can be captured and reproduced by agents, which makes auto-play testing possible. Balance, fairness, engagement, etc. can then be used as indirect proxies to reassemble "fun." A rough sketch of the idea follows below.
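
To make the DPBM idea concrete, a sketch of the behavioral-cloning half (the state encoding, network, and balance metric are placeholders of mine, not from the DPBM papers):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
STATE_DIM, N_ACTIONS = 64, 16  # placeholder game-state encoding

# Clone human play: fit a policy to logged (state, action) pairs so the
# agent reproduces human-like behavior rather than superhuman play.
human_policy = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                             nn.Linear(128, N_ACTIONS))
states = torch.randn(1024, STATE_DIM)            # stand-in for real play logs
actions = torch.randint(0, N_ACTIONS, (1024,))
opt = torch.optim.Adam(human_policy.parameters(), lr=1e-3)
for _ in range(200):
    loss = F.cross_entropy(human_policy(states), actions)
    opt.zero_grad(); loss.backward(); opt.step()

# Playtesting: run many cloned-agent matches and read off indirect
# proxies for "fun", e.g. win-rate balance between sides.
wins = torch.randint(0, 2, (500,)).float()       # stand-in for match results
print("side-A win rate:", wins.mean().item())    # ~0.5 suggests balance
```
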
