My number one requirement for a tool like this is that the JSON content never leaves the machine it's on.
I can only imagine the kind of personal information or proprietary internal data that has been unwittingly transmitted due to tools like this.
If my objective were to gain the secrets of various worldwide entities, one of the first things I would do is set up seemingly innocent pastebins, JSON checkers, and online file format converters, and permanently retain all submitted data.
Personal requirements aside (I have the same ones), just using this would constitute misconduct, at the very least, at my place of work.
Yes, it's a cool-looking tool, but there are certain requirements that ignorance doesn't exempt us from.
My pet gripe is all of the seemingly local (open source) tools that phone home with opt-out metrics: it's not mentioned in the "getting started", it takes some obscure flag to disable, and it's just that little bit more complex to do when running the de facto (containerised) build.
> My pet gripe is all of the seemingly local (open source) tools that phone home with opt-out metrics: it's not mentioned in the "getting started", it takes some obscure flag to disable, and it's just that little bit more complex to do when running the de facto (containerised) build.
I worked at a $massive_tech_company_with_extreme_secrecy and using these tools was expressly forbidden because of the risk. Maybe one exists, but I would gladly pay $20 for a Mac app that could do all of this locally: like a Markdown Pro-type app but for JSON formatting and validation. I want to simply open the app, paste in some JSON and have it format it to my requirements (spaces/tabs/pretty/etc.).
Completely agree. I could actually get a lot of use out of a tool like this, but the fact that even the VSCode extension sends the JSON to their servers and opens it at a publicly accessible URL makes this a no-go for me. I wouldn't recommend anyone use this for any remotely sensitive data.
The extension can apparently be configured to use a locally running instance of the server. But yes, by default it uses the remote version, and thus you post the JSON publicly, which may or may not be ideal depending on what you're doing.
Eric here (one of the creators of JSON Hero) and this is a really good point. We built JSON Hero earlier this year and partly wanted to use it to try out Cloudflare Workers and Remix, hence the decision to store in KV and use that kind of architecture. We're keen to update JSON Hero with better local-only support for this reason, and to make it easier to self-host or run locally.
There are instructions in the readme to 'run locally' - are you saying that even that version (running on localhost:8787) is sending something back to y'all, either from the client in the browser or via the locally-running server?
I was totally about to clone this repo and run it locally so I could play with some internal JSON.
This reminds me of an "Online HTML Minifier" website that analyzed the text and included affiliate links for random words within the text.
And they operated for years, until someone noticed links on their own website that they hadn't added themselves and tried to figure out how it happened, because nobody else had access to the website.
(Will update with a link, if I find it.)
My tool, flatterer (https://lite.flatterer.dev/), converts deeply nested JSON to CSV/XLSX and runs entirely in WebAssembly in the browser.
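To give a rough idea of what that nested-to-tabular conversion involves, here's a tiny JavaScript sketch I'm adding for illustration - it is not flatterer's actual algorithm, and the field names are made up:

    // Flatten one nested JSON object into a single CSV-style row:
    // nested keys become dotted column names.
    function flattenRow(obj, prefix = "", out = {}) {
      for (const [key, value] of Object.entries(obj)) {
        const column = prefix ? prefix + "." + key : key;
        if (value && typeof value === "object" && !Array.isArray(value)) {
          flattenRow(value, column, out);   // recurse into nested objects
        } else {
          out[column] = value;              // scalars/arrays become cell values
        }
      }
      return out;
    }

    console.log(flattenRow({ id: 1, address: { city: "Oslo", zip: "0150" } }));
    // -> { id: 1, "address.city": "Oslo", "address.zip": "0150" }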
It's hard to prove that it is not sending data to a server, so that it can be trusted. I know people could check dev tools, but that is error-prone and some users may not be able to do it.
I wish there were an easy way to prove this to users, as it would make online tools like this much more attractive.
I think there is an easy way to prove this to users. Make your thing a single-page, self-contained HTML file which they save to disk. Then they can trust the restricted permissions with which Chrome runs such local files.
If you have a tech-savvy audience, they can also view your thing in an iframe with only sandbox="allow-scripts" to prove that it's not making network requests.
I wrote an HTML/JS log viewer with those security models (https://GitHub.com/ljw1004/seaoflogs) - it handles up to 10k-line log files decently, all locally.
If anyone wants to try it out but doesn't want to send them their JSON, here's an example of some real-world data: https://jsonhero.io/j/t0Vp6NafO2p2
For me, this is harder to use than reading the JSON in a colour-highlighting text editor such as VSCode. I'm getting less information on the page, and it's harder to scan, but that might be because I'm used to reading JSON.
See also jsoncrack [1], which visualises JSON as n-ary tree data structures.
This project takes a different approach, in that it displays JSON leaf node data in a more human way, e.g. showing a colour picker for hex colours or a date picker for dates.
What sets this tool apart, however, is the static analysis of the JSON data, which can uncover divergences or outliers in the data, e.g. a single null value somewhere, or data which deviates from the majority data type (i.e. a number where every other value is a string).
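To make the outlier idea concrete, here's a small JavaScript sketch of detecting values whose type differs from the majority type for a given key - this is illustrative only, not how JSON Hero actually implements its analysis, and the data is made up:

    // Find rows whose value under `key` has a different type than most rows.
    function typeOutliers(rows, key) {
      const typeOf = (v) =>
        v === null ? "null" : Array.isArray(v) ? "array" : typeof v;
      const counts = {};
      for (const row of rows) {
        const t = typeOf(row[key]);
        counts[t] = (counts[t] || 0) + 1;
      }
      // Majority type = the most frequent type seen for this key.
      const majority = Object.keys(counts).sort((a, b) => counts[b] - counts[a])[0];
      return rows.filter((row) => typeOf(row[key]) !== majority);
    }

    // One numeric id among strings gets flagged:
    console.log(typeOutliers([{ id: "a1" }, { id: "a2" }, { id: 7 }], "id"));
    // -> [ { id: 7 } ]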
I think there's a value proposition in edge case detection alone. Datasets can be massive, and with something like JSON there is no formal type verification. Although, to be honest, I don't see a valid reason to use JSON as a backend, given that graph-based databases with type-safe schemas exist.
1: https://news.ycombinator.com/item?id=32626873
Tried it out on some REST response from a local test server.
And, well, as much as I applaud the effort, I also think that I'll stick to my text editor for browsing JSON data and to jq for extracting data from it.
My text editor because it's easy to perform free-text search and to fold sections, and that's all that I need to get an overview.
Jq because it's such a brilliantly sharp knife for carving out the exact data that you want. Say I had to iterate a JSON array of company departments, each with a nested array of employees, and collect everyone's email. A navigational tool doesn't help a whole lot, but it's a jq one-liner. Jq scales to large data structures in a way that no navigational tool would ever do.
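To make that concrete, here's a sketch of the data shape I mean, with made-up field names; the jq filter is shown in a comment, and the plain JavaScript equivalent follows for anyone reading along without jq:

    // Made-up example of the shape described above: an array of departments,
    // each with a nested array of employees.
    const departments = [
      { name: "Engineering", employees: [{ email: "ada@example.com" }, { email: "bob@example.com" }] },
      { name: "Sales",       employees: [{ email: "cyd@example.com" }] },
    ];

    // The jq one-liner would be roughly:
    //   jq '[.[] | .employees[] | .email]' departments.json
    // and the same extraction in plain JavaScript:
    const emails = departments.flatMap((d) => d.employees.map((e) => e.email));
    console.log(emails); // [ 'ada@example.com', 'bob@example.com', 'cyd@example.com' ]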
Also, there is the security issue of pasting potentially sensitive data into a website.
The first thing I see when I go to the site: JSON SUCKS
Uh... It does? I remember when XML was the main data interchange format of the web. That sucked. JSON is amazing, terrific, wonderful, etc. in comparison.
> I remember when XML was the main data interchange format of the web. That sucked.
I wonder why - apart from the "Should this be an element or an attribute?" issues and oddities in various implementations, XML doesn't seem like the worst thing ever.
Actually, in a web development context, I'd argue that the WSDL used with SOAP was superior to how most people worked with REST (and how some still do), since it's taken OpenAPI years to catch up, and codegen is still not quite as widespread, despite notable progress: https://openapi-generator.tech/
What does leave a sour taste, however, is the fact that configuration turned into XML hell (not in a web context, but for apps locally), much like we have YAML hell nowadays, and that being able to rely on codegen absolved people of the need to pay much attention to how intuitive their data structures are.
That said, JSON also seems okay, and it being simpler is a good thing. Though personally I feel JSON5 addresses a few things that some might find missing: https://json5.org/ (despite it being a non-starter for many, due to limited popularity/support).
Namespaces. I know why they were introduced, but they still were an incredible pain to use, especially with SOAP. You want to pass a <Customer> to the update method? No, it must be <Customer xmlns="http://example.com/api/customers/v2"> that is wrapped in a <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">.
Oh, you're writing a service? You can't just XPath your way to that <Customer>, because it's a namespaced <Customer>; your XML parser will claim there's no <Customer> in the message. You have to register your namespaces "http://example.com/api/customers/v2" and "http://www.w3.org/2003/05/soap-envelope" and look for /soap:Envelope/soap:Body/c:Customer instead.
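For anyone who hasn't felt that pain, here's a small browser JavaScript sketch of the bookkeeping being described - the element names and namespace URLs are the hypothetical ones from this thread:

    // Parse a namespaced SOAP-style payload and query it with XPath.
    const xml = `
      <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
        <soap:Body>
          <Customer xmlns="http://example.com/api/customers/v2">
            <Name>Jane</Name>
          </Customer>
        </soap:Body>
      </soap:Envelope>`;
    const doc = new DOMParser().parseFromString(xml, "application/xml");

    // A naive //Customer lookup finds nothing, because <Customer> lives in a
    // default namespace. Every prefix used in the XPath has to be mapped explicitly:
    const resolver = (prefix) =>
      ({
        soap: "http://www.w3.org/2003/05/soap-envelope",
        c: "http://example.com/api/customers/v2",
      }[prefix] || null);

    const customer = doc.evaluate(
      "/soap:Envelope/soap:Body/c:Customer",
      doc,
      resolver,
      XPathResult.FIRST_ORDERED_NODE_TYPE,
      null
    ).singleNodeValue;
    console.log(customer && customer.localName); // "Customer"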
JSON is annoyingly anal about its commas, but at least it has a single global namespace and I have never encountered a situation where I wished I could disambiguate between two different "customer" objects in my JSON payload.
It's not the worst ever (that would be YAML) but it does have an accumulation of annoying features.
* Elements and attributes (as you said).
* Text children mixed up with elements. These two are both good for writing documents by hand (e.g. HTML) but really annoying to process.
* Namespaces are frankly confusing. I understand them now but I didn't for years - why is the namespace a URL but there's nothing actually at that URL? 99% of the time you don't even need namespaces.
* The tooling around XML is pretty good but it's all very over-engineered just like XML.
* The syntax is overly complicated and verbose. Repeated tag names everywhere. Several different kinds of quoting.
* XML schema is nice but it would be good if there was at least some support for basic types in the document. The lack of bool attributes is annoying, and there's no standard way to create a map.
JSON is better by almost every metric. It is missing namespaces but I can't think of a single time I've needed that in JSON. Mixing up elements from different schemas in the same place is arguably a terrible idea anyway.
The only bad things about JSON are the lack of comments and trailing commas (which are both fixed by JSON5) and its general inefficiency.
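On the JSON5 point, a tiny sketch of what it buys you, using the json5 npm package (https://json5.org/); this assumes the package is installed, and the config contents are made up:

    // Assumes `npm install json5`.
    import JSON5 from "json5";

    const config = JSON5.parse(`{
      // comments are allowed
      retries: 3,                            // so are unquoted keys
      hosts: ["a.example", "b.example",],    // ...and trailing commas
    }`);
    console.log(config.hosts.length); // 2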
The inefficiency can sometimes be solved by using a binary JSON-style format, e.g. CBOR or Protobuf. With very large documents I've found it better to use SQLite.
It's always strange to think that we went through formats like XML (and even earlier, XDR) before inventing something as seemingly simple and obvious as JSON.
We had S-Expressions before we had JSON. (And JavaScript originally wanted to be a Lisp, too.)
It's not that we had XML and SGML and XDR because nobody had invented something as simple as JSON, yet. The real reasons are some complicated social hodgepodge that made those complicated beasts more accepted than the already-invented simpler approaches.
> It's always strange to think that we went through formats like XML (and even earlier, XDR) before inventing something as seemingly simple and obvious as JSON.
It's my understanding that JSON was not invented. It's just the necessary and sufficient parts of JavaScript to define data structures, and could be parsed/imported in a browser with a call to eval().
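Roughly what that looked like in practice - a tiny sketch with a made-up payload (modern code should of course use JSON.parse):

    const payload = '{"user": "ada", "roles": ["admin"]}';  // pretend this arrived over XHR

    // Early-days approach: evaluate the JSON text as a JavaScript expression.
    // The parentheses stop the leading { being parsed as a block statement.
    const viaEval = eval("(" + payload + ")");

    // What everyone does now that it's built in:
    const viaParse = JSON.parse(payload);

    console.log(viaEval.user, viaParse.roles[0]); // ada admin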
People who complain about JSON completely miss the whole point. It's not that it's great or trouble-free, it's that it solved the need to exchange data with a browser without requiring any library or framework.
Forking perfectly functional browser-based web apps into Electron apps is an irritating trend with very limited benefits. Some apps exist as Electron apps because they require native OS access; this app does not, so there is no reason to do so.
Porting to Electron would be trivial but in doing so you incur the following ramifications:
- the user has yet another instance of Chromium running on their device.
- they can't interact with browser based UIs easily any longer (bookmarking, retaining in history, copying the URL, different cookie/login jars, etc...).
- it might fragment the user's workflow even more if they have to interleave between Electron apps and their browser.
- lack of user extensions and some important accessibility features.
In Chrome you can create a shortcut for the page and select "open in a new window", which by and large emulates the workflow you request. I'm sure there's a similar process for Firefox.
Interestingly enough, when I want the compartmentalization experience that comes from an “App” on macOS, I turn to… Microsoft Edge. Edge has a nifty little feature that lets you “Appify” a website. I mostly find this useful for company-required PWAs, and most-of-all, Microsoft Teams. The Edge-“Appified” MS Teams on macOS is leaps and bounds more performant than the “Native” (Electron) MS Teams apps on macOS (consumes ~25MB of mem vs ~800MB). Has the nice benefit of your “Apps” being a Command-Space away.
I get a lot of useful information from reading raw URLs; the exact thing they're advertising here is the thing I hate most about Jira. I even wrote a Chrome extension that prevents Jira from loading smart links because I hate it so much. I can't imagine ever wanting a YouTube video preview while I'm skimming JSON data.
> My pet gripe is all of the seemingly local (open source) tools that phone home with opt-out metrics [...]
Exhibit A: DotNet! https://learn.microsoft.com/en-us/dotnet/core/tools/telemetr...
https://github.com/apihero-run/jsonhero-web/issues/134
> For me, this is harder to use than reading the JSON in a colour-highlighting text editor such as VSCode.
And yes, I feel the same. For me it's also easier to read it either in raw form or in VSCode.
Looks a bit like fzf combined with jq.
My guess is that XML is good for situations where text and data are mixed.
I can read a 10-line file without a parser. What I don't like is something 7 layers deep, nested, and 890 lines long.
https://altearius.github.io/tools/json/index.html
Was formerly hosted at http://chris.photobooks.com/json
I guess I could fork it myself, but I don't particularly want to have to run a web app to browse JSON either.
I wonder how easy it would be to port to Electron.
edit: clarify & format
Who wants a tool they rely on to one day update with spyware, start returning HTTP 404, or fill up with ads (like Toptal did with keycode.info)?
https://marketplace.visualstudio.com/items?itemName=JSONHero...