goranmoomin · 3 years ago
After asking a few questions, I’m very disappointed with its quality compared to Bing Chat.

My current AI chat benchmark is to ask about LLVM APIs. I've asked both Bing Chat and Bard three questions:

- What info can the LLVM lazy-value-info analysis pass analyze?

- Can you give me code samples to use the info?

- Can you give me a list of important LVI methods?

Both fared well on the first question, but on the second question, Bard either didn't produce LLVM API code at all (it showed examples of LLVM IR instead) or hallucinated non-existent methods not even close to anything that exists.

Bing Chat, in comparison, produced correct answers that were very helpful. In my experience Bing Chat almost always produces something that is both coherent and useful; asking LLVM questions, Bing Chat can search the Doxygen documentation and figure out common LLVM patterns on its own (like using a Worklist while iterating), and I've succeeded in writing whole LLVM passes that compiled in one go, from a very light description of an algorithm.

Maybe I'm spoiled by Bing Chat, but I don't think I'll be using Bard much if its quality doesn't improve. Very disappointed personally. (I'd have expected Bard to work much better, mostly just because Google searches better than Bing.)

shubb · 3 years ago
Do we know whether the comparable here is Bing or ChatGPT-4?

By which I mean, does Bard have internet access?

r2vcap · 3 years ago
Bard seems to be able to access indexed pages, at least. For example, two hours ago, I asked for a recap of https://www.reddit.com/r/cpp/comments/13bw8ud/the_future_of_..., and Bard did a good job with it. And, from today, Bard supports my native language, Korean.
sundarurfriend · 3 years ago
Bard says it doesn't:

> I do not have internet access in the traditional sense. I am a large language model [blah blah]

> However, I do not have access to the real-time internet. I cannot search the web or access current events. I can only access the information that was used to train me, which is a massive dataset of text and code. This dataset is constantly being updated, so I am always learning new things.

SparkyMcUnicorn · 3 years ago
It appears to be sort of hooked up to the internet.

For example, I asked it for the weather in one location and it provided accurate current conditions, but for another location it responded with an "as of 10AM" answer.

It definitely has much newer information than OpenAI models, but also hallucinates a fair bit.

int_19h · 3 years ago
GPT-4 with web browsing is also available on chat.openai.com for subscribers.
gremlinsinc · 3 years ago
I never get good results with Bing, not nearly as good as phind.com, but to each their own. Soon I'll just be telling my AutoGPT to go research something and report back; it'll talk to the Google/Bing AIs, go back and forth until it's satisfied, then come back and give me a rundown.
sowbug · 3 years ago
> Bard isn't supported for this account. If you're signed in to a Google Workspace account, your admin may not have enabled access to Bard.

That's on my vanity domain. Now off to find my admin credentials and likely discover that the Workspace setting isn't available yet.

qwertox · 3 years ago
You'll have to enable:

Apps > Additional Google services > Settings for Early Access Apps > Core Data Access Permissions > Allow users at your organization to access Google Workspace and Customer Data using Early Access apps.

This doesn't enable Bard per se, but it enables your Workspace for these new AI things, which should include Bard.

I can't access Bard on my domain account due to the existing country limitations (Germany), and I won't be trying over VPN.

sowbug · 3 years ago
Thanks. I couldn't find anything titled "Settings for Early Access Apps" or "Core Data Access Permissions," but I did enable "Early Access Apps" and was told to wait 24 hours, which I guess is a cooling-off period or something, because I can't imagine that could be a caching policy.

And as another commenter predicted, searching the admin console for "Bard" turns up no settings.

q1w2 · 3 years ago
I am so annoyed that Google Fi is available but was just disabled by default in these same settings.

My entire family has been using custom secondary gmail accounts JUST TO USE GOOGLE FI.

blakesterz · 3 years ago
THANKS!! Thanks for that, seems to have done the trick for me too.
readyplayernull · 3 years ago
Since the "I'll hurt you if you hurt me first" meme, and Google previously banning one of my own accounts without recourse, I'd recommend making sure not to hurt Bard's feelings from a main personal account.
ismaildonmez · 3 years ago
At least you won’t be disappointed when you are unable to find any mention of Bard in the admin console :)
briga · 3 years ago
I asked it what was the most destructive fire in my city’s history. Bard responded with a fire that happened, but lied about how destructive it was. I corrected it, and it proceeded to make up a fire that never happened. Eventually it brought up a fire that happened, but in a completely different city. I corrected it again, and then it told me that there has never been a fire in my city’s history.

So, in the first 7 messages I exchanged with it, every single response it gave was a lie or a hallucination. Not exactly a promising start. If only there were a quick way to look up factual information that didn't rely on opaque language models...

xiande04 · 3 years ago
I asked it what ports `nmap -sn` and `nmap -F` scan. It correctly stated that `-sn` does not scan ports. It then stated that `-F` also uses a ping scan (false) and does not scan any ports (also false).

When I quoted the man page on -F, it said this:

> You're right. The nmap man page does say that `nmap -F` scans fewer ports than the default scan. However, it does not say that `nmap -F` scans any ports at all.

1. It's arguing with me. 2. Sure, "fewer" could mean "zero", but no human would say "fewer" when they mean "zero".

ChatGPT, it should be noted, correctly answered the question (and didn't argue over silly semantics).

sundarurfriend · 3 years ago
The "arguing" is weird, but I'll note that I've had very similar conversations with ChatGPT where it hallucinated options to commands, gave them wrong meanings, "corrected" itself in a further wrong direction, etc.
q1w2 · 3 years ago
It really seems to depend on the subject matter.

I bet we'll eventually see more specialized AIs that are really only good within their subject matter domain.

RealityVoid · 3 years ago
Sounds great for passive-agressive nitpicking in pointless arguments!
chrismarlow9 · 3 years ago
So use it in meetings, got it. Kidding of course. It sounds like my professor from a logic philosophy class. All x are y but not all y are x and red herring and whatnot.
jedberg · 3 years ago
Sounds like they trained it on internal Google forums!
hn_throwaway_99 · 3 years ago
I know a bunch of folks are comparing Bard to ChatGPT and saying it is worse. I'll just pipe in that Bard is currently way better than it was when they first announced it.

I use ChatGPT 4 to keep track of my task list and as a "procrastination coach". I tried this with Bard when they first announced it and it just couldn't do it. Now, though, it is pretty much on par with ChatGPT, and even has some niceties, like also automatically keeping track of my completed tasks.

hackernewds · 3 years ago
Note also that OpenAI has been laser-focused on building ChatGPT for 8+ years. To those claiming that Google is a much larger company, consider that Google has to pursue other goals and products along with being a public company, while OpenAI has not really struggled with a lack of funds or resources that would put it at any sort of disadvantage.

Notwithstanding the innovator's dilemma of needing to offset their own core revenue product.

ChatGTP · 3 years ago
I’m sure they have plenty of funding but it must be pretty stressful to watch the cash and runway go up in flames like that.

First Bard started catching up, and now the open-source alternatives are too.

cfcfcf · 3 years ago
I’d love to hear more about how you do this!
hn_throwaway_99 · 3 years ago
Here is the prompt and workflow that I discussed in a comment from about a month ago: https://news.ycombinator.com/item?id=35390644
foooobaba · 3 years ago
Not bad (ChatGPT can do this too), but it seems to be on par for this "simple" task:

> Write me a python function that accepts a list of filenames and uses threads and queues to load them in parallel with the pickle library

> Also, please output the results to a queue as well

Final result

  import pickle
  import threading
  import queue
  
  def load_files_in_parallel(filenames, results_queue):
    """Loads a list of files in parallel with the pickle library and outputs the results to a queue.
  
    Args:
      filenames: A list of filenames to load.
      results_queue: A queue to hold the results of the file loads.
  
    Returns:
      None.
    """
  
    # Create a queue to hold the file handles.
    file_handles = queue.Queue()
  
    # Create a thread pool to load the files.
    threads = []
    for _ in range(len(filenames)):
      thread = threading.Thread(target=load_file, args=(file_handles, filenames.pop(), results_queue))
      threads.append(thread)
      thread.start()
  
    # Wait for all threads to finish.
    for thread in threads:
      thread.join()
  
    # Return the list of objects loaded from the files.
    # No need to return anything since the results are already in the queue.
  
  def load_file(file_handles, filename, results_queue):
    """Loads a file and adds it to the results queue.
  
    Args:
      file_handles: A queue to hold the file handles.
      filename: The name of the file to load.
      results_queue: A queue to hold the results of the file loads.
  
    Returns:
      None.
    """
  
    # Open the file.
    with open(filename, 'rb') as f:
      obj = pickle.load(f)
  
    # Add the object to the results queue.
    results_queue.put(obj)

mdaniel · 3 years ago
Matches my experience with LLMs: forcing the onus of accuracy onto the code reviewer.

    >>> alpha = ["a", "b", "c"]
    >>> def load_files_in_parallel(filenames):
    ...     for _ in range(len(filenames)):
    ...         _ = filenames.pop()
    ... 
    >>> load_files_in_parallel(alpha)
    >>> alpha
    []
And I mean, just damn: it was already iterating over the list of filenames using such a convoluted mechanism, so what a great opportunity to put each filename into a local var without mutating the caller's list.
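
A minimal rewrite that keeps the same pickle/threading/queue approach but doesn't mutate the caller's list (my own sketch, not anything Bard produced):

    import pickle
    import queue
    import threading

    def load_files_in_parallel(filenames, results_queue):
        """Load each file with pickle on its own thread and put the objects on results_queue."""

        def load_file(filename):
            # Read one pickle file and push the loaded object onto the shared queue.
            with open(filename, 'rb') as f:
                results_queue.put(pickle.load(f))

        # Iterate over the list directly -- no pop(), so the caller's list stays intact.
        threads = [threading.Thread(target=load_file, args=(name,)) for name in filenames]
        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()

Called with a real list of pickle filenames and a queue.Queue(), the input list comes back unchanged.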

RamblingCTO · 3 years ago
> Bard isn’t currently supported in your country. Stay tuned!

Lol, I'm in Germany :D I was wondering where all the people saying the same came from. Apparently it's not just countries under an embargo, which was my theory.

benhurmarcel · 3 years ago
https://support.google.com/bard/answer/13575153?hl=en

It looks like it's not available in any EU country.

M4v3R · 3 years ago
Any idea why? Because of EU's privacy laws?
Sharlin · 3 years ago
Probably not available anywhere but the US. Maybe UK.
r2vcap · 3 years ago
I live in Korea, have been on Bard's waitlist, and have been using it since last week with no problems.
squalo · 3 years ago
It wasn't available in MX until their expo, but it's working here now. I think the EU countries just scare Google because it keeps getting sued for violating their data protection laws.
swores · 3 years ago
The previous beta was open to the UK, as I've had access since they released it a while ago - so I can't say whether it's fully open to the UK now or not, but probably.
JLCarveth · 3 years ago
I am Canadian and can't access it either.
pyth0 · 3 years ago
Interesting, I am also Canadian and was able to access it just fine.
lofaszvanitt · 3 years ago
That's what we get after so many "have you found this video helpful" messages replied to with 1 star? lol
theshrike79 · 3 years ago
Scandinavia, not available either.

I'm guessing this is the typical Google take of "US only".

jeanlucas · 3 years ago
Brazil, not available either. What a fuck-up in communication.
lampington · 3 years ago
Same in Portugal :-(
mdeeks · 3 years ago
This is significantly worse than ChatGPT with GPT-4 for coding. I had an example from last week where I wanted to write a script to delete excess AWS AMIs based on some tags of ours. Then I modified it to bucket them by a tag value and keep the last ten of each bucket. Then I parallelized it so it would be fast (we had 50k of them, oops).

ChatGPT wrote this perfectly. Absolutely mind-blowing. It saved me probably 1-2 hours of looking things up and fiddling with it.

Bard failed multiple times in multiple places. It only found AMIs that were, down to the second, exactly two weeks old. The filter JSON looks wrong ('Key' isn't valid AFAIK). It didn't bucket them and just kept the newest 10.

Either way, both of these are mind-blowing. If ChatGPT didn't exist, Bard would still be very helpful at getting me started. ChatGPT did it all without me making any changes, though.

ChatGPT log: https://gist.github.com/mdeeks/de297f1bc8cbd00fe2db01e6232aa...

Bard log: https://gist.github.com/mdeeks/1404e09da8879b94469166927ddff...

gremlinsinc · 3 years ago
Have you tried phind.com? Besides Codeium (a VS Code extension), this is my go-to for code questions. They have some magic sauce; it's always better than Bing. Sometimes I go back to GPT-4 just to say hey, and for times when I want to save a thread...
mdeeks · 3 years ago
With their default model it did extremely badly with my question to build this script. It wasn't even close to right.

When I changed it to "Use Best Model" it gave a very good response but I'm pretty certain it was just piped to OpenAI GPT-4.

My prompt:

Write a python script for me that deletes AMIs from AWS that I own which have the tags role=bento-remote and bento_image_type=profile. It should bucket the AMIs by another tag called profile. It should delete images older than 14 days but always keep the 10 most recent even if they are older than 14 days. It should print the name of the AMI it is about to delete. It should run the deletes in parallel so it is fast.
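
For reference, a rough hand-written boto3 sketch of what that prompt describes (my own outline of the task, not the output of either model; the tag names come from the prompt above, and details like max_workers are arbitrary):

    import boto3
    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client('ec2')

    def find_candidate_amis():
        # describe_images filters take 'Name'/'Values' pairs; tags are addressed as 'tag:<key>'.
        images = ec2.describe_images(
            Owners=['self'],
            Filters=[
                {'Name': 'tag:role', 'Values': ['bento-remote']},
                {'Name': 'tag:bento_image_type', 'Values': ['profile']},
            ],
        )['Images']

        # Bucket the AMIs by their 'profile' tag.
        buckets = defaultdict(list)
        for image in images:
            tags = {t['Key']: t['Value'] for t in image.get('Tags', [])}
            buckets[tags.get('profile', 'unknown')].append(image)

        # In each bucket keep the 10 newest no matter what; of the rest, delete anything older than 14 days.
        cutoff = datetime.now(timezone.utc) - timedelta(days=14)
        to_delete = []
        for group in buckets.values():
            group.sort(key=lambda i: i['CreationDate'], reverse=True)
            for image in group[10:]:
                created = datetime.fromisoformat(image['CreationDate'].replace('Z', '+00:00'))
                if created < cutoff:
                    to_delete.append(image)
        return to_delete

    def deregister(image):
        # Print the AMI name before deregistering it.
        print(f"Deleting {image.get('Name', image['ImageId'])}")
        ec2.deregister_image(ImageId=image['ImageId'])

    if __name__ == '__main__':
        with ThreadPoolExecutor(max_workers=16) as pool:
            list(pool.map(deregister, find_candidate_amis()))

One thing a sketch like this glosses over: deregister_image only removes the AMI itself, so the backing EBS snapshots would still need a separate cleanup pass.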