You may have issued such a license...
Though without an explicit sublicense from Y Combinator, they may have issues with this application:
> Except as expressly authorized by Y Combinator, you agree not to modify, copy, frame, scrape, rent, lease, loan, sell, distribute or create derivative works based on the Site or the Site Content, in whole or in part, except that the foregoing does not apply to your own User Content (as defined below) that you legally upload to the Site.
I'm guessing you're one of those people who think atheism means a belief in the absence of a god, rather than its actual meaning, which is an absence of a belief in a god.
In this era of non-reproducible science and bad data collection, is it any surprise the polls are wrong too?
[1] https://www.natesilver.net/p/the-model-exactly-predicted-the...
If you hover the mouse over the image you will see the actual prompt, which is a human, Christian interpretation of the verse, not the actual verse.
I am disappointed.
(I say Christian because no Jew would ever give that prompt for "Expulsion from Eden".)
> OpenAI has another AI, GPT-3, that I used to generate many of the ideas for DALL·E prompts. I wanted to explore DALL·E using a wide variety of styles and artists, and I have limitations and biases when it comes to my knowledge of art history. GPT-3 cast a wider net of styles and artists than I would’ve come up with on my own.... The GPT-3 prompts I used evolved over time, but this one is emblematic:
> Suggest 5 unique concept ideas for a work of visual art inspired by Luke 14:7-11 (do not pick the place of honor) in the Bible. Include art direction and a specific medium and artist to emulate. Include artists from a variety of eras, styles, and media. Try for an unusual perspective. Title, year, medium. Description.
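For anyone wanting to reproduce that idea-generation step, here is a minimal sketch using the legacy OpenAI Python client. The engine name, temperature, and token limit are my own assumptions; the article doesn't say which GPT-3 model or settings were used, and the author pasted the resulting concepts into DALL·E by hand.

```python
# Sketch only: the model name and sampling settings below are guesses,
# not what the article's author actually used.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

IDEA_PROMPT = (
    "Suggest 5 unique concept ideas for a work of visual art inspired by "
    "Luke 14:7-11 (do not pick the place of honor) in the Bible. "
    "Include art direction and a specific medium and artist to emulate. "
    "Include artists from a variety of eras, styles, and media. "
    "Try for an unusual perspective. Title, year, medium. Description."
)

response = openai.Completion.create(
    model="text-davinci-003",   # assumed GPT-3 engine
    prompt=IDEA_PROMPT,
    max_tokens=500,
    temperature=0.9,            # higher temperature for more varied concepts
)

# Each returned concept can then be used as a DALL-E image prompt.
print(response.choices[0].text.strip())
```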
We already couldn't trust the media; Bloomberg will just be another example making that clearer.
> To compare a midrange pair on quality, the Bing Search vs. a Gemini 2.5 Flash comparison shows the LLM being 1/25th the price.
That is, 40x the price _per query_ on average (the query being the unit of user interaction). LLMs with web search will only multiply this, since several search queries are made behind the scenes for each user query.

EDIT: thanks, zahlman; he does quote LLM prices per 1M tokens, i.e. roughly 1k user queries, so the above concern is mistaken!
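For clarity, here is the unit conversion behind that edit as a back-of-the-envelope sketch. The prices are placeholders rather than the article's figures, and the ~1,000 tokens per user query is an assumption; the point is only that search APIs bill per query while LLM APIs bill per token, so both must be normalized to cost per user query before comparing.

```python
# Placeholder prices, not the article's figures; only the unit conversion matters.
TOKENS_PER_QUERY = 1_000               # assumed average tokens per user query

search_price_per_1k_queries = 25.0     # hypothetical $/1k queries for a search API
llm_price_per_1m_tokens = 1.0          # hypothetical $/1M tokens for an LLM

search_cost_per_query = search_price_per_1k_queries / 1_000
llm_cost_per_query = llm_price_per_1m_tokens / 1_000_000 * TOKENS_PER_QUERY

print(f"search: ${search_cost_per_query:.4f} per query")
print(f"llm:    ${llm_cost_per_query:.4f} per query")
# At ~1k tokens per query, a per-1M-token price is effectively a per-1k-query
# price, which is why quoting LLM costs per 1M tokens already accounts for
# the per-query comparison.
```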