Readit News
datadr commented on Your backend should probably be a state machine   docs.statebacked.dev/blog... · Posted by u/abrgr
datadr · 2 years ago
Really interesting framework for thinking about backend systems. I've been messing around with LLMs, and there's a real need to check and constrain outputs when building anything rigorous. I've only been working with static workflows, but I can imagine it only gets more complex if the workflow changes dynamically based on the LLM output. I wonder if this state-machine-based backend is a good way to manage that.
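For the static case, constraining LLM output with an explicit state machine could look something like this (a hypothetical sketch, not the State Backed API; the state names and `step` helper are made up):

```python
# Allowed transitions for a hypothetical support workflow. The LLM proposes
# the next state, but the machine only advances along legal edges.
ALLOWED = {
    "awaiting_input": {"classify"},
    "classify": {"answer", "escalate"},
    "answer": {"done"},
    "escalate": {"done"},
}

def step(state: str, llm_choice: str) -> str:
    """Advance only if the LLM's proposed next state is a legal transition."""
    if llm_choice not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {llm_choice}")
    return llm_choice
```

The point is that even if the model hallucinates a next step, the transition table bounds what the backend will actually do.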
datadr commented on Don Knuth plays with ChatGPT   cs.stanford.edu/~knuth/ch... · Posted by u/talonx
datadr · 3 years ago
Two modifications: 1) Ran with GPT-4. 2) Included Chain-of-thought prompting (adding “Let’s think step by step.”)

Results below.

Prompt 1: Tell me what Donald Knuth says to Stephen Wolfram about chatGPT. Let's think step by step.

Response 1: Since Donald Knuth and Stephen Wolfram are both computer scientists and have made important contributions to the development of algorithms and programming languages, their conversation about ChatGPT might go something like this:

Donald Knuth: "Stephen, have you had a chance to explore OpenAI's latest model, ChatGPT?"

Stephen Wolfram: "Yes, Donald, I have. It's quite impressive to see how far natural language processing has come, with GPT-3 demonstrating strong language capabilities. What's your opinion on it?"

Donald Knuth: "I agree, the model provides a significant advance in terms of conversation, context-awareness and generating human-like responses. However, I am intrigued by its limitations as well. For instance, it sometimes produces plausible-sounding yet incorrect or nonsensical answers."

Stephen Wolfram: "That's true. While the model excels in generating human-like text, we must remember that it's inherently driven by patterns it learns from the data it's been trained on. It doesn't possess real understanding or reasoning capabilities."

Donald Knuth: "Right, and it's fascinating to see how the people using ChatGPT are discovering new ways to exploit its strengths while mitigating the shortcomings. Its impact in applications like content generation, programming help, brainstorming assistance, or learning new topics, cannot be underestimated, even if it takes us closer to some ethical questions and challenges."

Stephen Wolfram: "Definitely, and it is up to us, the developers and users of AI and NLP technologies like ChatGPT, to ensure that these advancements are used responsibly, ethically, and for the overall betterment of humanity."

Donald Knuth: "Well said, Stephen. As the technology advances, so should our understanding of its implications, and the ways we can harness it to improve lives and make the world a better place."

datadr · 3 years ago
Prompt 3 (skipped 2, since covered by prompt 1): Why does Mathematica give the wrong value for Binomial[-1,-1]? Let's think step by step.

Response 3: The binomial coefficient is defined as:

Binomial[n, k] = n! / (k! * (n-k)!)

where n! denotes the factorial of n, i.e., the product of all positive integers less than or equal to n. Factorial is defined only for non-negative integers. However, Mathematica extends the factorial function and the binomial coefficient function to handle negative integers and non-integer numbers using the Gamma function.

For a non-negative integer n, the relationship between the factorial function and the Gamma function is:

Gamma[n+1] = n!

For other complex numbers z except non-positive integers (i.e., 0, -1, -2, ...), the Gamma function is defined as:

Gamma[z] = Integral[t^(z-1) * Exp[-t], {t, 0, Infinity}]

Now let's consider the Binomial[-1, -1]. Using the definitions above, we have:

Binomial[-1, -1] = Gamma[-1+1] / (Gamma[-1+1] * Gamma[-1-(-1)+1]) = Gamma[0] / (Gamma[0] * Gamma[1])

Note that Gamma[1] = 1! = 1.

However, the Gamma function has a problem when dealing with non-positive integers. It has "poles" at these points, meaning that the Gamma function goes to infinity:

Gamma[0] -> Infinity

So, we have:

Binomial[-1, -1] = Infinity / (Infinity * 1) = Indeterminate

Mathematica, however, might give the value of Binomial[-1, -1] as 1. This result comes from directly applying the reflection formula for the Gamma function:

Gamma[z] * Gamma[1-z] = Pi / Sin[Pi*z]

For z = 0, we get:

Gamma[0] * Gamma[1-0] = Pi / Sin[Pi*0]

Infinity * 1 = Pi / 0 = Infinity

This leads to the result of Binomial[-1,-1] = 1, which is incorrect and a potential issue with Mathematica's handling of the Gamma function for certain cases.

The value Binomial[-1, -1] should be treated as an indeterminate form rather than a specific value due to the ambiguity in the Gamma function's behavior at non-positive integers.
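The limiting behavior the response gestures at can be checked numerically (a quick sketch using Python's `math.gamma` rather than Mathematica; the `binom_gamma` helper is made up for illustration):

```python
import math

def binom_gamma(n, k):
    """Binomial coefficient via the Gamma extension: Gamma poles make
    some integer inputs undefined under direct evaluation."""
    return math.gamma(n + 1) / (math.gamma(k + 1) * math.gamma(n - k + 1))

# Gamma has poles at non-positive integers, so evaluating at n = k = -1
# hits Gamma[0] and fails outright:
try:
    binom_gamma(-1, -1)
except ValueError:
    pass  # math.gamma(0) is a pole

# Approaching along n = k = -1 + eps, the Gamma[0] poles cancel and the
# ratio Gamma(eps) / (Gamma(eps) * Gamma(1)) is identically 1:
for eps in (1e-3, 1e-6, 1e-9):
    print(binom_gamma(-1 + eps, -1 + eps))
```

Along this particular path the limit is 1, which matches Mathematica's answer; the "indeterminate" framing comes from the fact that other paths through the poles can give different limits.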

datadr commented on Monetary Tightening and U.S. Bank Fragility in 2023   papers.ssrn.com/sol3/pape... · Posted by u/datadr
datadr · 3 years ago
The paper does a few things, including calculating mark-to-market losses on bank assets, estimating bank insolvency, and simulating bank runs under a couple of scenarios. The conclusions are pretty concerning: the US banking system's market value of assets is $2 trillion lower than book value, and under the simulated conditions (not totally sure how feasible) 186 banks fail. At least one of these is a >$1T bank.
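For intuition on the mark-to-market mechanism (a toy sketch with made-up numbers, not the paper's methodology): a fixed-rate bond bought at par loses market value when yields rise, because the same cashflows are discounted at the new, higher rate.

```python
def present_value(coupon, face, years, yield_rate):
    """Price a bond by discounting its coupons and face value."""
    pv = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv + face / (1 + yield_rate) ** years

# A 10-year 2% coupon bond: worth par at a 2% yield (its book value),
# but markedly less after yields rise to 5%.
book = present_value(coupon=2.0, face=100.0, years=10, yield_rate=0.02)
market = present_value(coupon=2.0, face=100.0, years=10, yield_rate=0.05)
loss = book - market  # unrealized mark-to-market loss
```

Scaled across long-duration securities held at book value, this gap is what the paper's $2T figure is measuring.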
datadr commented on Exploring the Use of ChatGPT for Product Recommendations   cleanliving.substack.com/... · Posted by u/datadr
datadr · 3 years ago
Large language models like ChatGPT are getting embedded into search, and monetization is becoming a bigger priority. If things shift to an ad-based business model, it seems like this could lead to more biased responses in areas like product recommendations. I wanted to take a look at ChatGPT's product recommendations before this shift takes place, so I started prompting it to recommend the best product for various types of household goods. I tested each prompt multiple times and was surprised how consistently ChatGPT recommended the same specific products. I've documented some of the recommendations here and think it would be interesting to run some more rigorous experiments, which I can write up if there's interest!
