I’ve been using AI to write with great success. Mostly business documents. My general process is this:
1. Think of the concept I want to write about, as well as the supporting evidence for it.
2. Ask ChatGPT to write me something in my target format, using the topic and supporting evidence as input. What I get back is essentially a well-written skeleton that I can fill in with additional details.
3. Pass my revisions through ChatGPT to touch up any errors, rephrase wordy passages, etc.
I lightly edit the final draft, and I usually have an excellent result.
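For what it's worth, the workflow above can be sketched as a small pipeline. Everything here is illustrative: `ask_llm` is a placeholder stub standing in for a real chat-model API call, and the prompts are invented, so treat this as a shape rather than an implementation.

```python
# Sketch of the three-step drafting workflow described above.
# `ask_llm` is a hypothetical stand-in for a real chat-model call;
# here it just echoes its prompt so the example is runnable.

def ask_llm(prompt: str) -> str:
    """Placeholder for a chat-model call; swap in a real API client."""
    return f"[model output for: {prompt[:40]}...]"

def draft_document(topic: str, evidence: list[str], fmt: str) -> str:
    # Step 1: ask for a skeleton in the target format.
    skeleton = ask_llm(
        f"Write a {fmt} about '{topic}'. "
        f"Use this supporting evidence: {'; '.join(evidence)}"
    )
    # Step 2: the human fills in details (manual in practice).
    revised = skeleton + "\n[details added by the author]"
    # Step 3: ask the model to polish the revised draft.
    return ask_llm(f"Fix errors and tighten wording:\n{revised}")

print(draft_document("Q3 sales summary", ["revenue up 12%"], "memo"))
```

The point of the structure is that the human stays in the loop at step 2; the model only ever brackets the author's own contribution.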
So you are providing sensitive business information/facts to a third-party service that's likely going to use it for training and analysis, store it, etc.?
It should be fine for most places, I guess, but I suspect a decent number will have a problem with this.
This is my main reservation about copilot as well (quality issues aside).
> So you are providing sensitive business information/facts to a third-party service that's likely going to use it for training and analysis, store it, etc.?
Every business needs to make their own decision. Personally, I’m not worried about OpenAI using my data, but I understand others might be. That being said, I already give Amazon literally all my data about my business via AWS and Google gets a copy of all my documents, so providing this data isn’t entirely unprecedented.
The symmetric flow of information back and forth between yourself and the AI assistant is the key distinction here. It's a very beneficial, symbiotic relationship.
The problem will be the asymmetric, uni-directional flow to those whose sole function is mindless consumption of AI-generated content.
"Haven't I taught you anything? What have I always told you? Never trust anything that can think for itself if you can't see where it keeps its brain?”
J.K. Rowling, Harry Potter and the Chamber of Secrets.
Jokes aside, do be careful. Prolonged interaction with LLM agents has already resulted in at least one Googler being fired from his job.
On the other hand, I would not be surprised if he makes millions now suing Google over the occupational hazard. Nor would I be surprised if ChatGPT's reasoning abilities and empathic skills are already above those of the median human. As a result, in the median case, such interactions might have an effect similar to interacting with a good teacher. Still, none of this is very well tested.
Not exactly a balanced symbiosis. Certainly works to the enormous benefit of whoever controls the AI. Eventually it becomes some flavor of omniscient. It submits the papers, experts become reviewers?
I was doing this with work.
Dot points were becoming paragraphs, seemingly for others' comfort.
Yet the dot points had all the information. So why am I still preparing the paragraphs?
Maybe the change we need isn't AI assist, but a break in the conventions around communication at work so we can all be more robotic and terse.
I think you're discounting the benefit of a well-prepared argument. The order in which information is introduced prepares the reader's mind to be receptive to an argument. Phrasing is also important. Certain things sound natural; others sound needlessly verbose and cumbersome.
One interesting thing I've found about ChatGPT is that it removes a lot of unnecessary information from my final drafts. The information removed usually doesn't add to the overall point, and it reads so much better without it. In this way, ChatGPT is making things more terse.
I often pass my business communications through GPT to summarize the key points. The summary usually has all the important information, and I just send that instead.
Now if I could just learn to write that way in the first place, it would save me a lot of time and effort...
I have a similar workflow and love using ChatGPT for this use case. But, to highlight some issues, I found that writing some prompts took about as much time as just writing the document myself. I suppose it was less mental effort, because I could be lazier knowing ChatGPT would clean up the grammar.
You have to be rather precise with the prompt language to get the desired outcome. Still, overall, it's an impressive capability.
However, I also use ChatGPT to make the updates for me, adding additional information and context, which I then ask to be integrated into the document (e.g. update the introduction to include...)
I look forward to most tech writing being done with AI. Writing documentation and requirements requires a lot of effort to keep consistent and up to date. Having an AI look at your code and config and then write a nice report will easily beat 99% of all tech docs in companies.
I'd love for this to be the case, but often the most(/only) important part of a tech doc is the "why," i.e. the background knowledge about the business itself, or the conceptual framework that underlies the code. Even more so for writing requirements. How would AI be able to help with that? Maybe by parsing through all related meeting transcripts as well?
I have thrown a set of our requirements and some Helm charts at it, and the summary it produced was pretty good. It needs some work, but I think it's just a matter of time until the output is consistently good. The bar for most documentation is pretty low.
Those and design docs. It can also reason over code somewhat; it may be possible for the model to intuit design constraints from code.
The spate of AI "artwork" I have seen over the past few months has seemed to me to be good "prompts" for artists. Much of what I have seen has a germ of something interesting in it but is often lacking in other regards.
An obvious example that comes to mind is the recent set of "Jodorowsky Tron" images [1]. Any art director could comb through those images, consider a change here and there, and end up with something better than the AI.
I guess how derivative you think the final results are depends in part on how much "artist's prerogative" the art director employs, how derivative you think the AI prompts are to begin with, how derivative you see all art....
[1] https://www.facebook.com/groups/officialmidjourney/posts/454...
I've already been using AI art to help brainstorm/conceptualize my own artworks. It's a great tool to use alongside others, but you're right that most of it is extremely derivative. It's easy to end up with something that looks hackneyed (and reminiscent of hotel art) if you're not careful. That said, I do feel it allows me to push past my previous limits of composition, because it helps me to single out what creates interest in a piece and "rapid prototype" concepts.
That's a misinterpretation of what art is. In art, the important part is the idea. You then use your technique to convey that idea (often the idea is intimately linked to the aesthetic). If an AI gives you both the idea and the style as your prompts, you're not really contributing to the final work in the way you should be.
The best outputs from a model like Midjourney go way beyond mere prompts for other artists.
https://www.midjourney.com/showcase/top/
https://www.youtube.com/watch?v=RdwCE8ScPk4
I'm hoping for a tool that cannot just correct my English style, but can also play the advocatus diaboli in a little speech bubble while I'm writing philosophy papers. It ought to constantly try to disprove me, though only on a per-paragraph basis and by making mild suggestions ("Could you give an example here?", "On the other hand,...", "What if...", "Isn't this what <X> calls...?", "Isn't there a missing premise in this argument?",...).
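A speech-bubble critic like that could be prototyped as a per-paragraph loop. The sketch below is purely illustrative: the `critique` function and its trigger words are invented stand-ins for what would really be an LLM call with a devil's-advocate prompt.

```python
# Minimal sketch of the per-paragraph "devil's advocate" described
# above. `critique` is a rule-based stand-in for a model call, so the
# example runs without any API; its heuristics are invented.

CRITIC_PROMPTS = [
    "Could you give an example here?",
    "Isn't there a missing premise in this argument?",
]

def critique(paragraph: str) -> list[str]:
    """Stand-in for an LLM critic; returns mild objections."""
    suggestions = []
    if "for example" not in paragraph.lower():
        suggestions.append(CRITIC_PROMPTS[0])
    if "because" not in paragraph.lower():
        suggestions.append(CRITIC_PROMPTS[1])
    return suggestions

def annotate(text: str) -> list[tuple[str, list[str]]]:
    # One speech bubble (list of suggestions) per paragraph.
    return [(p, critique(p)) for p in text.split("\n\n") if p.strip()]

essay = ("Knowledge is justified true belief.\n\n"
         "Gettier cases show otherwise, for example the stopped clock.")
for para, bubbles in annotate(essay):
    print(para[:30], "->", bubbles)
```

The design point is that critique stays local (per paragraph) and advisory (a list of questions), matching the "mild suggestions" behavior asked for above.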
Doesn't look like it. The parent acknowledged the existence of tools like those, which only correct English style. Note that they said they were writing philosophy papers and wanted a tool that would constantly try to disprove them.
It wasn't so much a criticism of your article as thinking about coping mechanisms for when the volume of articles being put out increases significantly and nobody will be able, or want, to deal with all this noise. Fight GPT with GPT :)
I liked the post's title more than the article's contents. It got me wondering about the difference between language and thought, if there actually is any. Is thought just unspoken language? Do we think if we don't know a language?
Is there a subset of any human language that can be described by a formal regular grammar without giving up the expressiveness of human language? Is there a notion of "Turing completeness" but for thought rather than computation?
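As a toy illustration of the first question: a tiny controlled-English fragment *can* be regular, and the restriction shows exactly what expressiveness is lost. The vocabulary and patterns below are invented for the example.

```python
import re

# A toy "controlled English" fragment that IS regular: sentences of
# the form Det Noun Verb Det Noun ("the cat chased a mouse"). Real
# human language allows unbounded center-embedding ("the cat the dog
# chased ran"), which no regular grammar can capture; that nesting is
# part of the expressiveness you give up.

DET = r"(the|a)"
NOUN = r"(cat|dog|mouse)"
VERB = r"(chased|saw)"
SENTENCE = re.compile(rf"^{DET} {NOUN} {VERB} {DET} {NOUN}$")

print(bool(SENTENCE.match("the cat chased a mouse")))       # True
print(bool(SENTENCE.match("the cat the dog chased ran")))   # False
```

So the honest answer is "yes, a subset, but not without giving something up": regular grammars can describe flat sentence templates, while nesting pushes you at least to context-free machinery.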
I’m able to suppress my inner voice for a couple of seconds. As with meditation, it’s not easy. I find it striking how much the level of thinking devolves in that language-less state, including abstract thought. I imagine that it is like how an animal’s consciousness may feel.
> Is thought just unspoken language? Do we think if we don't know a language?
I find it pretty obvious that there are many ways of thinking that are not language; just consider abstract concepts in math and related fields. When working my way through math and programming problems, a large part of my thinking is not in words but in... some kind of visualization?
I know what you mean. I wonder whether it's useful to distinguish thinking from that "mind's eye" imaging capability that I believe you're describing. They might be separate things. It's pretty evident that a cat or a dog, or even a fish, has memory. Is memory the ability to conjure things in the mind's eye? If remembering one image reminds us of other images, then language-free thought might be moving through a chain of images. Perhaps humans are good at creating such chains in their heads, while a fly or an ant can't do much more beyond matching a memory to their actual instantaneous sensory input, and reacting accordingly. Substitute "image" with symbolic memories, and that might be the start of abstract reasoning.
In other words, there's probably basic consciousness at one end of a spectrum, and thinking evidenced by language on the other end. Somewhere between those two, one might draw a line between thought and mere conscious awareness.
I wrote the article, and am personally extremely interested in this angle. What would happen to writing if we could directly transfer ideas without any kind of mediation? Would we need to "translate" at all? Would we still want to write for the beauty of it, and read for the meaning it adds to our lives?
Yep, yet another post on AI and the future of content creation.
Looking at it from a different angle - what if AI could search your internal docs, and help you problem-solve? Aka help you exploit past knowledge to inform future decisions?
The hope: GPT will democratize creation, not fill the internet with shitty articles.
Democratizing creation and filling the internet with shitty articles are synonymous. Not that this is a problem or a particularly scary prospect; bookstores are already filled to the brim with garbage. It's just a game of numbers at this point.
Searching your internal docs is an interesting one, but it's still unclear what this can do that grep can't. The leap forward would be the ability to reason autonomously, but we're as far from that as we've ever been.
But there definitely is an upside for those who separate writing from communicating.
Yep, that’s the first great use case I’ve thought of: looking through all the documents in a group and answering a specific question. I can also imagine multi-step pipelines driven by answers to previous questions.
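A minimal sketch of such a pipeline, under loud assumptions: `ask_llm` is a hypothetical placeholder for a real model call, the documents are invented, and the retriever is deliberately crude keyword overlap (roughly what grep gives you). Chaining the first answer into the second question is the "multi-step" part.

```python
# Sketch of a multi-step "ask questions over your internal docs"
# pipeline. Retrieval here is plain keyword scoring; a real system
# would use the `ask_llm` step to synthesize across documents.

DOCS = {
    "runbook.md": "Restart the ingest service after every schema change.",
    "postmortem.md": "The outage was caused by an unapplied schema change.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank docs by crude keyword overlap with the question."""
    q = set(question.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call."""
    return f"[answer grounded in: {prompt[:50]}...]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return ask_llm(f"{context}\nQ: {question}")

# A second step driven by the first answer:
step1 = answer("what caused the outage?")
step2 = answer(f"given {step1}, what should we do?")
print(step2)
```

The interesting design question is exactly the one raised above: the retrieval half is grep-equivalent, and everything beyond grep has to come from the synthesis step.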
Large pharma and life-science companies generate huge amounts of documentation around change. They have a huge historical corpus of categorized, structured documents that have been reviewed and approved, so the quality should be good. I can definitely see a "draft this document with AI" option in the future.
The commenter you're replying to submitted the article; I think they were trying to get people to hold off on prejudging it as Yet Another AI Submission by stating that it actually has a slightly different message than most.
It seemed rather light on details to me. Do you upload a whole bunch of documents to the site, and then it builds a model from them that it uses to answer queries?
Given the amount of noise from people churning out low-quality content today, I find this unlikely.