Every 'platform for AI agents' announcement makes me wonder: are we building tools for a workflow that actually exists, or are we building tools and hoping the workflow materializes? The GitHub parallel is instructive because GitHub succeeded by meeting developers where they already were (git). The question for Entire is whether agents have a natural coordination layer yet or if this is premature infrastructure.
I don't think your GitHub example is accurate. The vast majority of developers started using git after GitHub became a thing. They may have used SVN or another type of collaboration system before, but not git. And the main reason they started using git is that GitHub added such massive value on top of git, not that git itself was so amazing.
Git had already replaced Perforce and SVN most everywhere I'd seen before GitHub came along. CVS was still horrible and still in use in a lot of places, though.
I mean, git was '05 and GitHub was '08, so it's not like the stats will say much one way or another.
Stack Overflow only added it to their survey in 2015. No source of truth, only anecdotes.
I have to hard disagree on that. I know many developers personally who were on SourceForge and Google Code before and migrated to GitHub specifically because it offered git.
I don't think SVN and Mercurial were more widely used than git before GitHub became popular, but GitHub definitely killed off most of the use of those.
It seems at this point everyone and their mother, i.e. "we", are building the "tools" for which "we" mostly hope the VC money will materialise. Use cases are not important - if OpenAI can essentially work with Monopoly money, why can't "we" do it too?
> if OpenAI can essentially work with Monopoly money, why can't "we" do it too?
The answer is, in case anyone wonders: because OpenAI is providing a general purpose tool that has potential to subsume most of the software industry; "We" are merely setting up toll gates around what will ultimately become a bunch of tools for LLM, and trying to pass it off as a "product".
I do not think that's how it worked out for GitHub: I'd rather say that Git (as complex as it was to use) succeeded due to becoming the basis of GitHub (with its simple, clean interface).
At the time, there were multiple code hosting platforms like Sourceforge, FSF Savannah, Canonical's Launchpad.net, and most development was still done in SVN, with Git, Bazaar, Mercurial the upstart "distributed" VCSes with similar penetration.
Yes, development was being done in SVN but it was a huge pain. Continuous communication was required with the server (history lookups took ages, changing a file required a checkout, etc.) and that was just horribly inefficient for distributed teams. Even within Europe, much more so when cross-continent.
A DVCS was definitely required. And I would say git won out due to Linus inventing and then backing it, not because of a platform that would serve it.
Yes to all that. And GitLab the company was only founded in 2014 (OSS project started in 2011) and ran through YC in 2015, seven years after GitHub launched.
Of the thousands, a handful will prevail. Most of it is vaporware, just like in any boom. Every single industry has this problem: copy-cats, fakes & frauds.
"Buy my fancy oil for your coal shovel and the coal will turn into gold. If you pay for premium, you don't have to shovel yourself."
If everything goes right, there won't be a coal mine needed.
I'd bet that fewer people had their source code in git in 2008 than the number of developers using the various coding agents today. And the open-source project that we published today hooks into the existing workflow for those developers, in Claude Code and in Gemini CLI. Time will tell the rest. We will publish regular updates and you can judge us on those results.
At least for me, I have felt like the chat history in an agent is often times just as important and potentially even more important than the source code it generates. The code is merely the compiled result of my explanations of intent and goals. That is, the business logic and domain expertise is trapped in my brain, which isn't very scalable.
Versioning and tracking the true source code, my thoughts, or even the thoughts of other agents and their findings, seems like a logical next step. A hosted central place for it and the infrastructure required to store the immense data created by constantly churning agents that arrive at a certain result seems like the challenge many seem to be missing here.
We are building tools and hoping an exit materializes. There's so much funny money in AI right now that getting life-altering money seems easily attainable.
HN is full of AI agent hype posts. I have yet to see legitimate and functional agent orchestration solving real problems, whether of scale or velocity.
This is the point of the post, and helpfully it was added at the top in a TL;DR - it was half of that two-sentence TL;DR. Will it succeed or not? Well, that's a coin toss; it always has been.
I mean, pretty much all big startups begin as "niche" things that people might care about later. Tesla, Airbnb, Twitch... and countless failures too. It's just how the game is.
> Checkpoints are a new primitive that automatically captures agent context as first-class, versioned data in Git. When you commit code generated by an agent, Checkpoints capture the full session alongside the commit: the transcript, prompts, files touched, token usage, tool calls and more.
This thread is extremely negative - if you can't see the value in this, I don't know what to tell you.
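To make it concrete, a checkpoint object presumably looks something like this - field names are my guess from that list, not Entire's actual schema:

```python
# Hypothetical checkpoint object, guessed from the announcement's own list
# (transcript, prompts, files touched, token usage, tool calls). Not
# Entire's actual schema.
checkpoint = {
    "commit_sha": "0de9ad5",              # the commit this session produced
    "agent": "claude-code",
    "transcript": [                       # the full session, turn by turn
        {"role": "user", "content": "add retry logic to the uploader"},
        {"role": "assistant", "content": "...", "tool_calls": ["Edit", "Bash"]},
    ],
    "files_touched": ["src/uploader.py", "tests/test_uploader.py"],
    "token_usage": {"input": 48210, "output": 3977},
}
```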
What kind of barrier/moat/network effects/etc would prevent someone with a Claude Code subscription from replicating whatever "innovation" is so uniquely valuable here?
It's somewhat strange to regularly read HN threads confidently asserting that the cost of software is trending towards zero and software engineering as a profession is dead, but also that an AI dev tool that basically hooks onto Git/Claude Code/terminal session history is worth multiples of $60 million.
I built a basic copy in about an hour with my own "platform for AI agents" that I built out over the last week: https://github.com/jwbron/egg/pull/504, and refined it here: https://github.com/jwbron/egg/pull/517 (though right after I merged this I blew through my weekly token quota for my second Claude Max 20x account, so I haven't been able to test it out yet).
I think your point is valid and I've been having the same thoughts. My tooling is still in the experimental phase, but I can move so quickly that I'm having trouble grasping how products like this will survive. If I can build this out in a week and copy an idea like this one (which is a great one, mind you) in an hour, what's the value of paying someone for a product like this vs just building it myself?
> What kind of barrier/moat/network effects/etc would prevent someone with a Claude Code subscription from replicating whatever "innovation" is so uniquely valuable here?
You are correct, that isn't the moat. Writing the software is the easy part
There's no way this company is just a few git and Claude hooks with a CLI. They're definitely working on a SaaS - something else that isn't open source, that this primitive is the basis of. Like a GitHub for agent code.
I have never seen any thread that unanimously asserts this. And even if one did, treating HN/Reddit assertions as evidence is the wrong way to look at things.
I currently develop small utilities with the help of AI, but am far from vibe coding or using agents. I review every single suggestion and do some refactoring at each step, before any commit (sometimes heavy refactoring; sometimes reorganizing everything).
In my experience LLMs tend to touch everything all of the time and don't naturally think about simplification, centralization, and separation of concerns. They don't care about structure; they're all over the place. One needs to breathe down their necks to get anything organized.
Maybe there's a way to give them more autonomy by writing the whole program in pseudo-code with just function signatures and let them flesh it out. I haven't tried that yet but it may be interesting.
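For example, a skeleton like this (all names made up) could be handed to the model to flesh out:

```python
# Made-up skeleton: signatures and docstrings only, for the LLM to fill in.
def parse_config(path: str) -> dict:
    """Read the config file at `path` and return it as a dict.
    Raise ValueError on unknown keys."""
    ...


def fetch_records(db_url: str, since: str) -> list[dict]:
    """Return all records newer than the ISO date `since`."""
    ...


def render_report(records: list[dict]) -> str:
    """Render the records as a plain-text summary, newest first."""
    ...


def main() -> None:
    """Wire the pieces together: config -> fetch -> report."""
    ...
```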
Well, a famous name is attached; this could be the start of the product that replaces GitHub. Building GitHub 2 would give the opportunity to fix mistakes that are too entrenched to change at GitHub, and who better to try? I'm uncharacteristically optimistic on this one, I'd give it a try!
We have had this for ages now... I just don't have access to the sort of people willing to pass me $60M for that. I never thought it was worth anything, really; it was a trivial-to-implement afterthought.
I love this one so much! The arbitrary decision to cherry-pick a particular product for this degree of critique, when the same could be said about 99% of the stuff SV churns out, including, in all likelihood, anything you've ever worked on.
For the last three or four months, what I've been doing is: anytime I have Claude write a comment on an issue, it just adds a session ID, file path, and the VM it is on. That way, whenever something comes up, we just search through issues, and we can also retrace the session that produced the work - it's all traceable. In general, I just work through Gitea issues and sometimes Beads. I couldn't stand having all these MD files in my repo because I was just drowning in documentation, so having it in issues has been working really nicely, and agents know how to work with issues. I did have it write a Gitea utility, and they are pretty happy using/abusing it. Anytime I see them call it in some way that generates errors, I just have them improve the utility. By this point, it pretty much always works. It's been really nice.
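The utility is nothing fancy. A rough sketch of that kind of tool (the issue-comment endpoint is standard Gitea API; env var names and metadata fields are placeholders, not what my actual utility looks like):

```python
# Rough sketch: append session metadata as a comment on a Gitea issue so
# the work is traceable back to the agent session that produced it.
import json
import os
import platform
import sys
import urllib.request

GITEA_URL = os.environ["GITEA_URL"]      # e.g. https://gitea.example.com
GITEA_TOKEN = os.environ["GITEA_TOKEN"]


def comment_on_issue(owner: str, repo: str, issue: int, body: str) -> None:
    req = urllib.request.Request(
        f"{GITEA_URL}/api/v1/repos/{owner}/{repo}/issues/{issue}/comments",
        data=json.dumps({"body": body}).encode(),
        headers={"Authorization": f"token {GITEA_TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    owner, repo, issue = sys.argv[1], sys.argv[2], int(sys.argv[3])
    # The traceability breadcrumbs: session ID, transcript path, VM name.
    body = "\n".join([
        f"session: {os.environ.get('SESSION_ID', 'unknown')}",
        f"transcript: {os.environ.get('SESSION_PATH', 'unknown')}",
        f"vm: {platform.node()}",
    ])
    comment_on_issue(owner, repo, issue, body)
```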
I haven't read the article yet but this conversation reminds me of Docker. Lots of people "didn't get it." I told them at the time: if you don't get it you aren't ready for it yet so don't worry about it. When you do need it, you'll get it and then you'll use it and never look back. Look at where we are with containers now.
Wow, read through the comments and you weren't joking. I attribute this to the crossroads of "this release is v0.1 of what we are building" and an HN crowd who have been scrolling past 120 AI frameworks and hot takes daily and have no patience for anything that isn't immediately 100% useful to them in the moment.
I find the framing of the problem to be very accurate, which is very encouraging. People saying "I can roll my own in a weekend" might be right, but they don't have $60M in the bank, which makes all the difference.
My take is this product is getting released right now because they need the data to build on. The raw data is the thing, then they can crunch numbers and build some analysis to produce dynamic context, possibly using shared patterns across repos.
Despite what HN thinks, $60M doesn't just fall in your lap without a clear plan. The moat is the trust people will have to upload their data, not the code that runs it. I expect to see some interesting things from this in the coming months.
100% agree because there’s a lot of value in understanding how and why past code was written. It can be used to make better decisions faster around code to write in the future.
E.g., if you’ve ever wondered why code was written in a particular way X instead of Y then you’ll have the context to understand whether X is still relevant or if Y can be adopted.
E.g., easier to prompt AI to write the next commit when it knows all the context behind the current/previous commit’s development process.
But that's not what is in the whole context. The whole context contains a lot of noise and false "thoughts".
What the AI needs to do is to document the software project in an efficient manner without duplication. That's not what this tool is doing.
I question the value in storing all the crap in git.
ehhhh is it really that useful though? Sounds way more noisy than anything, and a great way to burn through tokens. It's like founding a startup to solve the problem of people squashing their commits. Also, it sounds like something Claude Code/Codex/etc could quickly add an extension for.
Maybe use critical thinking instead of a mindless dismissal?
The fact that you haven't offered a single counterargument to any other poster's points and have to resort to pearl-clutching instead is pretty good proof that you can't actually respond to them and are just emotionally lashing out.
This is literally what Claude Code already does, minus the commit attachment. It's just very fancy marketing speak for the exact same thing.
I'm happy to believe maybe they'll make something useful with $60M (quite a lot for a seed round, though), but maybe don't get all lyrical about what they have now.
Please don't use quotation marks to make it look like you're quoting someone when you aren't. That's an internet snark trope and we're trying to avoid those on HN.
https://news.ycombinator.com/newsguidelines.html
Look it’s obvious at this point to anyone who is actually using the tools.
We can articulate it but why should we bother when it’s so obvious.
We are at an inflection point where discussion about this, even on HN, is useless until the people in the conversation are on a similar level again. Until then we have a very large gap in a bimodal distribution, and it’s fruitless to talk to the other population.
Some Tom, Dick, and Harry to VCs: I have a proposal for you.
VCs: what is it
Tom Dick & Harry: AI
VCs: get the ** out of here, we already burnt enough money and will never see it back
Tom Dick & Harry: hear me out this is different
VCs: ok, you have 5 minutes to explain your product to us
Tom Dick & Harry: I don't have one
VCs: get the ** out of here
Tom Dick & Harry: hear me out
VCs: ok, you have 30 seconds to impress us.
Tom Dick & Harry: I just quit Microslop and still have high level contacts there
VCs: Hot damn!!! you are our lottery ticket to recoup all the money we have lost in other ventures. This is going to be a race against time, before your contacts go stale. Here's 60M for you, wine and dine your friends with it. On your way out you will find some AI generated product names and some vague product descriptions. Pick one and slap it on some website and announce our deal. Now get the ** out of here.
I have CURRENT_TASK.md that does more or less the same thing. It also gets committed to git. So I guess that's entire? Wish I'd realized I was sitting on a $60M idea…
It's sad to see that the ex-GitHub CEO didn't make enough money to just kick-start his company himself, but needs external money that will later dictate how the company works, or will sell the users and the product for the next exit...
> Spec-driven development is becoming the primary driver of code generation.
This sounds like my current "phase" of AI coding. I have had so many project ideas over the years that I can just spec out: everything I've thought about, all the little ideas and details, things I only had time to think about, never implement. I then feed it to Claude and watch it meet my every specification. I can then test it, note any bugs, recompile, and re-test. I can review the code, as you would with a junior you're mentoring, and have it rewrite it in a specific pattern.
Funnily enough, I love Beads, but did not like that it uses git hooks for the DB, and I can't tie tickets back to ticketing systems, so I've been building my own alternative; mine just syncs to and from GitHub issues. I think this is probably overkill for what's been a solved thing: ticketing systems.
I am going lower level - every individual work item is a "task.md" file. It starts as a user ask, then planning is added, and then the agent checks off gates "[ ]" on each subtask as it works through them. In the end the task files remain part of the project, documenting the work done. I also keep an up-to-date mind map of the whole project to speed up start time.
And I use git hooks on the tool event to print the current open gate (subtask) from task.md, so the agent never deviates from the plan; this is important if you use yolo mode. It might be an original technique - I've never heard of anyone else using it. A sticky note in the tool response, printed by a hook, that highlights the current task and where the current task.md is located. I have seen stretches of 10 or 15 minutes of good work done this way with no user intervention. Like a "Markdown Turing Machine".
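A rough sketch of what that hook does (the checkbox format and file location are my conventions, nothing standard):

```python
# Sketch of the hook's job: find the first open gate "[ ]" in task.md and
# print it into the tool response so the agent stays on plan.
import re
from pathlib import Path

TASK_FILE = Path("task.md")  # location is a convention, not a standard


def current_gate() -> str | None:
    for line in TASK_FILE.read_text(encoding="utf-8").splitlines():
        m = re.match(r"\s*[-*]\s*\[ \]\s*(.+)", line)  # "- [ ] subtask"
        if m:
            return m.group(1).strip()
    return None


if __name__ == "__main__":
    gate = current_gate()
    if gate:
        print(f"STICKY NOTE: current gate is '{gate}' (see {TASK_FILE.resolve()})")
    else:
        print("STICKY NOTE: all gates in task.md are closed")
```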
That's hilarious - I called them gates too in my reimplementation of Beads. Still working on it a bit, but this is the one I built out a month back and got into git a week ago.
For me, a gate is a dependency that must pass before a task is closed. It could be human verification, unit testing, or even "can I curl this?" or "can I build this?" - and gates can be re-used, but every task MUST have one gate.
My issue with git hooks integration at that level is (and I know this sounds crazy) that not everyone is using git. I run into legacy projects, or things still greenfield as heck, where all you have is a POC zip file your manager emailed you for whatever awful reason. I like my tooling to be agnostic to models and external tooling so it can easily integrate everywhere.
Yours sounds pretty awesome, for what it's worth - just not for me. Wish you the best of luck.
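For clarity, a minimal sketch of the gate idea described above (the task shape and commands are illustrative, not from any of the tools mentioned):

```python
# Minimal sketch: a task can't close until its one required gate command
# exits 0. Commands here are examples, not a fixed vocabulary.
import subprocess


def gate_passes(command: str) -> bool:
    """Run the gate ("can I build this?", "can I curl this?") and report."""
    return subprocess.run(command, shell=True).returncode == 0


def close_task(task: dict) -> bool:
    if not task.get("gate"):
        # Every task MUST have one gate.
        raise ValueError(f"task {task['id']} has no gate")
    return gate_passes(task["gate"])


if __name__ == "__main__":
    task = {"id": "demo-42", "gate": "curl -fsS http://localhost:8080/health"}
    print("closable:", close_task(task))
```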
Me too. I've been using spec-kitty [0], a fork of Spec Kit. Quite amazing how a short interview on an idea can produce full documents of requirements, specs, tasks, etc. After a few AI projects, this is my first time using spec-driven development, and it is definitely an improvement.
Task management is fundamentally straightforward and yet workflow specific enough that I recommend everyone just spend a few hours building their own tools at this point.
I started off with the original beads and it was definitely a nightmare. However I would recommend using https://github.com/Dicklesworthstone/beads_rust - it's a much simpler implementation of the same concept, without all the random extra stuff thrown on to support Gas Town.
> Checkpoints run as a Git-aware CLI. On every commit generated by an agent, it writes a structured checkpoint object and associates it with the commit SHA. The code stays exactly the same, we just add context as first-class metadata. When you push your commit, Checkpoints also pushes this metadata to a separate branch (entire/checkpoints/v1), giving you a complete, append-only audit log inside your repository. As a result, every change can now be traced back not only to a diff, but to the reasoning that produced it.
The context for every single turn could in theory be nearly 1MB. Since this context is being stored in the repo and constantly changing, after a thousand turns, won't that make just doing a "git checkout" really heavy?
For example, codex-cli stores every single context for a given session in a jsonl file (in .codex). I've easily got that file to hit 4 GB in size just working for a few days; amusingly, codex-cli would then take many GB of RAM at startup. I ended up writing a script that periodically trims the jsonl history. The latest codex-cli has an optional sqlite store for context state.
My guess is that by "context", Checkpoints doesn't actually mean the contents of the context window, but just distilled reasoning traces, which are more manageable... but still can be pretty large.
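The kind of trimming script I mean is tiny. A sketch (the sessions path and keep-count are guesses at a typical setup, not codex-cli settings):

```python
# Keep only the most recent events in each session .jsonl so startup
# stays sane. Path and keep-count are assumptions about a typical setup.
from collections import deque
from pathlib import Path

KEEP_LAST = 2000  # newline-delimited JSON events to retain per file


def trim_jsonl(path: Path, keep: int = KEEP_LAST) -> None:
    # Stream the file so a multi-GB history never has to fit in memory.
    with path.open(encoding="utf-8") as f:
        tail = deque(f, maxlen=keep)
    path.write_text("".join(tail), encoding="utf-8")


if __name__ == "__main__":
    for session in (Path.home() / ".codex" / "sessions").rglob("*.jsonl"):
        trim_jsonl(session)
```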
> won't that make just doing a "git checkout" really heavy?
not really? doesn't git checkout only retrieve the current branch? the checkpoint data is in another branch.
we can presume that the tooling for this doesn't expect you to manage the checkpoint branch directly. each checkpoint object is associated with a commit sha (in your working branch, master or whatever). the tooling presumably would just make sure you have the checkpoints for the nearby (in history) commit sha's, and system prompt for the agent will help it do its thing.
i mean all that is trivial. not worth a $60MM investment.
i suspect what is really going on is that the context makes it back to the origin server. this allows _cloud_ agents, independent of your local claude session, to pick up the context. or for developer-to-developer handoff with full context. or to pick up context from a feature branch (as you switch across branches rapidly) later, easily. yes? you'll have to excuse me, i'm not well informed on how LLM coding agents actually work in that way (where the context is kept, how easy it is to pick it back up again). this is just a bit of opining based on why this is worth 20% of $300MM.
if i look at https://chunkhound.github.io it makes me think entire is a version of that. they'll add an MCP server and you won't have to think about it.
finally, because there is a commit sha association for each checkpoint, i would be worried that history rewrites or force pushes MUST use the tooling otherwise you'd end up screwing up the historical context badly.
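a sketch of the lookup i'm imagining, to make it concrete (the branch name is from the announcement; the one-object-per-sha file layout is my guess, not documented behavior):

```python
# sketch: fetch the metadata branch (entire/checkpoints/v1, per the
# announcement) and read a checkpoint by commit sha. the <sha>.json
# layout on that branch is an assumption, not documented behavior.
import subprocess


def checkpoint_for(sha: str) -> str | None:
    subprocess.run(["git", "fetch", "origin", "entire/checkpoints/v1"],
                   check=True)
    result = subprocess.run(
        ["git", "show", f"origin/entire/checkpoints/v1:{sha}.json"],
        capture_output=True, text=True,
    )
    return result.stdout if result.returncode == 0 else None


if __name__ == "__main__":
    head = subprocess.run(["git", "rev-parse", "HEAD"],
                          capture_output=True, text=True,
                          check=True).stdout.strip()
    print(checkpoint_for(head) or "no checkpoint for HEAD")
```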
I wish you the best of luck with your startup.
my code is 90% ai generated at this point
This is not their offering, this is a tool to raise interest.
I still remember the reaction when Dropbox was created: "It's just file sharing; I can build my own with FTP. What value could it possibly create".
It's because of everybody there.
Currently no one is on Entire - the investors are betting they will be.
If it were also their last, I would be inclined to agree.
Runs git checkpoint every time an agent makes changes?
That's how a trillion dollar company also does it, turns out.
0: https://github.com/karthink/gptel
```markdown
# Run NNNN

## First Impressions
[What state is the project in? What did the last agent leave?]

## Plan
[What will you work on this iteration? Why?]

## Work Log
[Fill this in as you work]

## Discoveries
[What did you learn? What surprised you? What should the next agent know?]

## Summary
[Fill this in before committing]
```
This is surprisingly effective and lets agents easily continue in-progress work and understand past decisions.
So.. yea. Ignore and move on.
Just update it to iterate over your file. It should be a little easier to manage than git hooks and can hammer in testing.
https://github.com/Giancarlos/GuardRails
I'm confused how this is any different to the pretty standard agentic coding workflow?
[0]: https://github.com/Priivacy-ai/spec-kitty
Beads is a nightmare.