For example, here's the same app but packaged as a regular bash script:
https://gist.github.com/lwneal/a24ba363d9cc9f7a02282c3621afa...
"OpenAI has also drawn criticism for many other safety and ethics issues related to the launches of ChatGPT and GPT-4, including regarding copyright issues, labor conditions for data annotators, and the susceptibility of their products to "jailbreaks" that allow users to bypass safety controls...
A different approach to signaling in the private sector comes from Anthropic, one of OpenAI's primary competitors. Anthropic's desire to be perceived as a company that values safety shines through across its communications, beginning from its tagline: "an AI safety and research company." A careful look at the company's decision-making reveals that this commitment goes beyond words."
[1] https://cset.georgetown.edu/publication/decoding-intentions/
OpenAI have the resources to also publish this as HTML. They chose not to.
They're not alone in this - most of the academic and research world, and the very concept of a "whitepaper", seems predicated on publishing PDFs.
Is this some stupid thing where human beings are expected to attach more prestige to information published in this way?
PDFs are a terrible way of publishing information in 2023:
- they render poorly on mobile devices, where many (most?) people do their reading
- they're hard to copy and paste information out of
- you can't link to headings within them (like HTML fragment links)
- you can't easily run them through translation tools like the one built into Chrome
The benefits of PDF I can see are:
1. Easier to print and get the exact expected output
2. You can save one file offline
3. Easier to author
I'm not arguing to replace PDFs with HTML (though I wouldn't miss them personally) - I'm saying publish documents as both!
Provide an HTML version and a PDF alternative for people who want it.
Am I missing something here? Why does the academic and research world stubbornly stick to such a hostile way of publishing their results?
This isn't necessarily still true: HTML content can stay up on the web forever, and a PDF can change, but people still prefer to cite something that looks like a paper document.
Since a whitepaper is often meant to be cited, it's published as a PDF to take advantage of that preference.
The best approach is to publish a PDF for citation along with a public HTML demo, like https://jonbarron.info/mipnerf360/
Puzzled, he lay back down, but then he heard someone else yell out, "72!", followed by even more laughter.
"What's going on?" he asked his cellmate.
"Well, we've all heard every joke so many times, we've given them each a number to make it easier."
"Oh," he says, "can I try?"
"Sure, go ahead."
So, he yells out "102!" and the place goes nuts. People are whooping and laughing in hysterics. He looks at his cellmate rolling on the ground laughing.
"Wow, good joke, huh?"
"Yeah! We ain't never heard that one before!"
However, that discounts the waste on the edges of the circular wafer, as well as the chip yield, which will both likely be worse for the larger chip [3]. But, assuming a generous 70% yield by area [4], one wafer's worth of H100s all packaged into GPUs and running full blast will use maybe 20 kilowatts, while the same wafer of A16s might use 3.6 kilowatts. Although in practice, the A16s will spend most of their time conserving battery power in your pocket, and even the H100s will spend some of their time idle.
TSMC is now producing over 14 million wafers per year. At most 1.2 million of those are on the 3nm node, and not all of that production goes to GPUs. But as an upper bound, if we imagine that all of TSMC's wafers could be filled up with nothing but H100 chips, and if all of those H100 chips were immediately put to use running AI 24/7, how much additional load could it put on the power grid every year?
The answer is around 280 gigawatts of continuous draw, or about 2500 terawatt-hours over a year of 24/7 operation. That's about 10% of current world electricity consumption! So it's not completely implausible to imagine that a huge ramp-up in AI usage might have an effect on the electric grid.
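For anyone who wants to check the arithmetic, here's a rough back-of-envelope sketch in Python. The die sizes, TDP figures, 70% yield, and 14 million wafers/year come from the comment and links above; the dies-per-wafer estimate is a simple area ratio with an assumed edge-loss factor, and the A16 peak power is my own rough guess, so treat the exact outputs as order-of-magnitude only (they land near, not exactly on, the figures quoted above).

    # Back-of-envelope: power draw of a wafer's worth of H100s vs. Apple A16s,
    # plus an upper bound on grid load if every TSMC wafer became H100s.
    # Rough numbers from the comment above, not authoritative specs.

    WAFER_DIAMETER_MM = 300
    WAFER_AREA_MM2 = 3.14159 * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,700 mm^2

    H100_DIE_MM2 = 814   # approximate H100 die area
    H100_TDP_W = 350     # approximate H100 PCIe board power
    A16_DIE_MM2 = 113    # approximate Apple A16 Bionic die area
    A16_PEAK_W = 7       # rough peak package power for a phone SoC (assumption)

    EDGE_LOSS = 0.9      # crude factor for unusable area at the wafer edge (assumption)
    YIELD = 0.7          # "generous 70% yield by area" from the comment

    def chips_per_wafer(die_mm2):
        return int(WAFER_AREA_MM2 * EDGE_LOSS * YIELD / die_mm2)

    h100s = chips_per_wafer(H100_DIE_MM2)
    a16s = chips_per_wafer(A16_DIE_MM2)
    print(f"H100s per wafer: ~{h100s}, drawing ~{h100s * H100_TDP_W / 1000:.0f} kW")
    print(f"A16s per wafer:  ~{a16s}, drawing ~{a16s * A16_PEAK_W / 1000:.1f} kW")

    # Upper bound: all ~14 million TSMC wafers/year filled with H100s, running 24/7.
    WAFERS_PER_YEAR = 14e6
    total_gw = WAFERS_PER_YEAR * h100s * H100_TDP_W / 1e9
    total_twh = total_gw * 8760 / 1000  # GW * (hours in a year) -> TWh
    print(f"Total: ~{total_gw:.0f} GW continuous, ~{total_twh:.0f} TWh/year")

This gives roughly 50-60 H100s per wafer at around 19 kW, and a total in the low hundreds of gigawatts / low thousands of TWh per year, consistent with the estimate above.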
*edit: This assumes we're talking about the Apple A16 (i.e. the difference between phone chips and GPU chips). If we're talking about the Nvidia A16 (i.e. the difference between current GPU chips and last node's GPU chips), see pclmulqdq's comment.*
[1] https://nanoreview.net/en/soc/apple-a16-bionic
[2] https://www.techpowerup.com/gpu-specs/h100-pcie-80-gb.c3899
[3] https://news.ycombinator.com/item?id=24185108
[4] https://www.extremetech.com/computing/analyst-tsmc-hitting-5...
[5] https://www.tsmc.com/english/dedicatedFoundry/manufacturing/...
[6] https://www.wolframalpha.com/input?i=%2814+million%29+*+%282...
Probably also canonical are Goodfellow's Deep Learning [2], Koller & Friedman's PGMs [3], the Krizhevsky ImageNet paper [4], the original GAN [5], and arguably also the AlphaGo paper [6] and the Atari DQN paper [7].
[1] https://aima.cs.berkeley.edu/
[2] https://www.deeplearningbook.org/
[3] https://www.amazon.com/Probabilistic-Graphical-Models-Princi...
[4] https://proceedings.neurips.cc/paper_files/paper/2012/file/c...
[5] https://arxiv.org/abs/1406.2661
https://m.youtube.com/watch?v=XDO8OYnmkNY&t=120s