https://physics.stackexchange.com/questions/290/what-really-...
Apparently. Not that I know either way.
It says AI could be impactful to those fields. Modern chemistry is impactful to medicine but it didn't replace doctors.
So unfortunately the article this post linked to, while it has its own merits, starts off by citing a clickbait tweet and wildly misinterpreting the paper in its first sentence. Still, I hope people give both the article and the paper a generous reading. Even if the article opens terribly, it makes interesting points which we shouldn't disregard just because of a lazy hook. The intermediate tweet, by contrast, is just lazy clickbait: half-truths and screencaps, the bread and butter of modern disinformation.
The sad thing is that it's easier than it's ever been to follow up on references. Even in this case, where the tweet itself provides no citations at all, I had to search for less than a minute to find the original paper.
You see all these examples like "I got ChatGPT to make a JS space invaders game!" and that's cool and all, but that's sort of missing a pretty crucial part: the beginning of a new project is almost always the easiest and most fun part of the project. Showing me a robot that can make a project that pretty much any intern could do isn't so impressive to me.
Show me a bot that can maintain a project over the course of months and update it based on the whims of a bunch of incompetent MBAs who scope creep a million new features and who don't actually know what they want, and I might start worrying. I don't know anything about the other careers so I can't speak to that, but I'd be pretty surprised if "Mathematician" is at severe risk as well.
Honestly, is there any reason for Microsoft to even be honest with this shit? Of course they want to make it look like their AI is so advanced because that makes them look better and their stock price might go up. If they're wrong, it's not like it matters, corporations in America are never honest.
The paper itself [1] doesn't say "replace" anywhere: the purpose was to measure where AI has an "impact". They even say (in the discussion)
It is tempting to conclude that occupations that have high overlap with activities AI performs will be automated and thus experience job or wage loss... This would be a mistake ... Take the example of ATMs, which ... led to an increase in the number of bank teller jobs as banks opened more branches at lower costs and tellers focused on more valuable relationship-building...
Ok, good. Something definitely seems amiss when a bunch of CS researchers are reporting that "mathematicians" are one of the most "replaceable" occupations (good luck designing a new LLM without any knowledge of math). Overall this post says something about the sad state of Twitter and search: I had to dig through quite a few articles repeating this job-replacement crap before I could even find the title of the paper (which was then easy to find on arXiv). And go figure, the authors didn't mean to make the statement everyone says they made.
I'm not complaining about the package I got with the rental: like any packaged service, you take the good with the bad. But when things come packaged, a lot of the bad was never the consumer's choice in the first place.
* Absolutely never any beep or sound
* Direct controls, no "programs" (i.e. microwave has two knobs: power and time, etc.)
* No network connectivity of any kind (obviously)
With a strong brand identity and good marketing these would sell like hotcakes.
Maybe it's prophetic: the authors saw the writing on the wall and decided a doctor is a glorified mechanic who works on the most boring machine around (one that hasn't changed in 100k years). Or maybe the authors just decided the space was better filled by an ex-space-ninja or similar.
Since then, a lot has changed, and now it is all based on cling ( https://root.cern/cling/ ), which originates from clang and LLVM. cling is responsible for generating the serialization / reflection of the classes needed within the ROOT framework.
It's kind of neat that it works. It's also a bit fidgety: the cannibalized compiler code can cause issues (which, e.g., blocked C++11 adoption for a while in some experiments), and now CERN depends on bits of an old C++ compiler to read their data. Some may question the wisdom of storing a multi-billion-dollar dataset in a format with no spec, dependent on the internals of C++ classes (indeed, experiments are slowly moving to formats with a clear spec), but for sure having a standard for reflection would be better than the home-grown solution they rely on now.
[1]: https://indico.cern.ch/event/408139/contributions/979831/att...
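To make the "home-grown reflection" idea concrete, here is a toy sketch of the general pattern: a generated "dictionary" table records each member's name, type, and byte offset, and a generic serializer walks that table. This is NOT ROOT's actual API (names like `MemberInfo` and `particleDict` are invented for illustration); in ROOT, cling emits this kind of dictionary automatically from the class definition.

```cpp
#include <cassert>
#include <cstddef>   // offsetof
#include <string>
#include <vector>

// Hypothetical per-member metadata, standing in for what a
// cling-generated dictionary provides (greatly simplified).
struct MemberInfo {
    std::string name;
    std::string type;
    std::size_t offset;  // byte offset of the member within the class
};

// An example standard-layout class an experiment might store.
struct Particle {
    double px, py, pz;
    int charge;
};

// In ROOT this table would be generated from the class definition;
// here it is written by hand to show the shape of the information.
static const std::vector<MemberInfo> particleDict = {
    {"px",     "double", offsetof(Particle, px)},
    {"py",     "double", offsetof(Particle, py)},
    {"pz",     "double", offsetof(Particle, pz)},
    {"charge", "int",    offsetof(Particle, charge)},
};

// A generic serializer driven purely by the dictionary: it copies each
// member's bytes in declaration order, knowing nothing about Particle
// beyond the metadata table.
std::vector<unsigned char> serialize(const Particle& p) {
    std::vector<unsigned char> out;
    const auto* base = reinterpret_cast<const unsigned char*>(&p);
    for (const auto& m : particleDict) {
        const std::size_t size =
            (m.type == "double") ? sizeof(double) : sizeof(int);
        out.insert(out.end(), base + m.offset, base + m.offset + size);
    }
    return out;
}
```

The fragility the comment describes falls out of this picture: the on-disk bytes depend on member offsets and sizes, i.e., on the internals of the C++ classes, which is exactly why a format with an explicit spec is attractive.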
Then it went away and generated a load of confidently incorrect total bullshit.
"PhD level" my backside.