None taken!
This is a much more developed project indeed.
I think the SQLite kernel for Jupyter has a nice foreseeable future with graph visualizations for the query results and the whole integration with conda, mamba and pip environments, though.
Amazing share, my data productivity just went up significantly
I watched a video on that site from the creator, and after trivial installs of datasette and csvs-to-sqlite, I was exploring and faceting my data like never before.
I used to waste so much time generating static report summaries of my data, and now it's just two steps away (sketch below):
- bundle csvs into a db
- datasette it
- win
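A minimal sketch of those steps, assuming csvs-to-sqlite and datasette are already installed (file names are invented):

```sh
# bundle one or more CSV files into a single SQLite database
csvs-to-sqlite sales.csv customers.csv bundle.db

# serve it with Datasette and start browsing/faceting in the browser
datasette bundle.db
```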
You gotta re-run the query or you're limited to sorting just the visible results.
I guess I could let it spot when there are less than the page-size of results, but then I'd need to be confident that the JavaScript sorting algorithm exactly matches the underlying SQLite sorting algorithm - taking into account character sets and custom collations and suchlike.
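To make the collation trap concrete, here's a toy Python sketch (not Datasette code; table and values invented) showing SQLite's built-in NOCASE collation disagreeing with a naive code-point sort, which is exactly what a client-side re-sort would do:

```python
import sqlite3

# in-memory table whose column uses SQLite's built-in NOCASE collation
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruits (name TEXT COLLATE NOCASE)")
conn.executemany("INSERT INTO fruits VALUES (?)",
                 [("apple",), ("Banana",), ("cherry",)])

# SQLite honours the column collation and sorts case-insensitively
sqlite_order = [r[0] for r in conn.execute("SELECT name FROM fruits ORDER BY name")]

# a naive client-side sort compares raw code points, so uppercase sorts first
naive_order = sorted(name for (name,) in conn.execute("SELECT name FROM fruits"))

print(sqlite_order)  # ['apple', 'Banana', 'cherry']
print(naive_order)   # ['Banana', 'apple', 'cherry']
```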
I've been deliberately avoiding adding any JavaScript to core Datasette almost to make a point: you don't need it. HTML pages load faster than so-called "modern" SPA monstrosities. They work really well. They don't require a bunch of extra steps to avoid breaking the back button etc.
But... Datasette provides a JSON API for everything, and supports plugins. There's nothing to stop someone who really wants no-page-reload query execution from implementing it as a plugin.
I myself have built JavaScript plugins for things like bar charts and line charts ( https://github.com/simonw/datasette-vega ) and map visualizations ( https://github.com/simonw/datasette-cluster-map )
So yeah, my personal bias here is that building websites like we did in 1997 is deeply undervalued.
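For anyone curious what that JSON API looks like: every Datasette page has a .json equivalent, and arbitrary SQL can go in the query string. A quick sketch against a hypothetical local instance (host, database, and table names are placeholders):

```python
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8001"  # wherever your Datasette instance is serving

# every table page has a .json twin: /<database>/<table>.json
with urllib.request.urlopen(f"{BASE}/bundle/sales.json") as resp:
    table = json.load(resp)
print(table["rows"][:3])

# arbitrary SQL goes through /<database>.json?sql=...
params = urllib.parse.urlencode(
    {"sql": "select region, count(*) from sales group by region"})
with urllib.request.urlopen(f"{BASE}/bundle.json?{params}") as resp:
    print(json.load(resp)["rows"])
```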
Cool hack, but probably wouldn't install a separate kernel or run an entire notebook just for sqlite work.
What would be useful is for the kernel of your language of choice to provide a magic for sqlite and return results of queries in language-native data structures.
Something like this: https://pypi.org/project/ipython-sql/
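A sketch of what that looks like with ipython-sql in a Jupyter cell (database and table names are invented):

```python
# in a Jupyter/IPython session, with `pip install ipython-sql` done
%load_ext sql
%sql sqlite:///mydata.db

# the %sql line magic runs a query and returns a ResultSet...
result = %sql SELECT category, count(*) AS n FROM items GROUP BY category

# ...which converts straight into language-native structures
rows = list(result)       # list of row tuples
df = result.DataFrame()   # pandas DataFrame (needs pandas installed)
```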
I wonder if this is closer to what you're suggesting, as it already allows sharing of simple data structures between cells of distinct languages: https://polynote.org/. One cell: SQLite kernel to get data. Next cell: operates on that data.
Yeah, I had this idea at first, but it's not trivial to run more than one kernel at the same time.
I know some examples of people who did it, and some projects that I could plug in and try to make it work with my kernel, but this was a first version to see how the community responds!
Let's see, might do something like it in the future.
I think it'd be really cool and powerful to make it interact with Python for example.
Weird as it might sound, I can imagine a scenario where I wouldn't need anything past SQL in some form. I've got plenty of Python scripts sitting around my hard drive that are just fronts for data manipulation. Especially with the csv import mode, I could see myself using just sqlite or whatever.
Not really. You could use the python kernel and embed your SQL code within your python code but you couldn't run cells with just SQL. It made for a very poor analyst experience.
Does ADS support SQLite? I already use it for SQL Server, and I'd tried the PostgreSQL add-on but couldn't get it to work for some reason (but that was when it was first released so it might've stabilized since then).
Well, as someone who strongly dislikes Python and strongly likes SQLite, this might give me a reason to willingly use Jupyter (granted, I guess I could do it through Julia, too, which I also strongly like, but... eh).
A lot of the time when I need to work with a bunch of data I'll fire up the SQLite CLI, snarf the data with the built-in CSV import commands, and do whatever queries I wanna do. This looks like it can provide a nicer UI around that workflow, and I'm all for it.
Datasette looks interesting for this, too; I'll have to give that a look.
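For reference, the built-in import commands mentioned above are just a couple of dot-commands in the sqlite3 shell (file and table names are placeholders):

```console
$ sqlite3 scratch.db
sqlite> .mode csv
sqlite> .import data.csv observations
sqlite> .schema observations
sqlite> SELECT count(*) FROM observations;
```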
I like this. Previously I’ve used notebooks to explain a data pipeline from different perspectives. One notebook showing the ETL process going from raw data to RDF triples in a store; one SPARQL notebook showing the raw queries; and then a final decision support notebook using a Python binding to the underlying SPARQL query library. It seemed to work well.
This is awesome! Makes SQL as accessible in the classroom as Python - we waste so much time installing SQLite and troubleshooting for students (b-schools). As soon as this is accessible in Colaboratory (Google) or similar, a super-light web interface and notebooks become trivial... THANK YOU!!
Datasette ( https://github.com/simonw/datasette ) seems to me to be farther along in providing advanced querying/faceting/visualization/sharing capabilities on top of sqlite.
(I love jupyter, and this kernel seems neat; not trying to throw stones at anybody, just to link a project in a similar domain)
https://orgmode.org/worg/org-contrib/babel/intro.html#meta-p...
Congrats to the Jupyter team on this!
I realise the kernels are altogether different, but how is one better than the other?
No pip install anything, just import sqlite3:
https://docs.python.org/3.7/library/sqlite3.html
But I will be trying the Jupyter kernel in the article
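A minimal sketch of that batteries-included route, pairing the stdlib sqlite3 module with the csv module since there's no .import equivalent (file and table names are invented):

```python
import csv
import sqlite3

# plain stdlib: no third-party installs
conn = sqlite3.connect("scratch.db")
conn.execute("CREATE TABLE IF NOT EXISTS observations (station TEXT, value REAL)")

# load a CSV into the table with the stdlib csv module
with open("data.csv", newline="") as f:
    rows = [(r["station"], float(r["value"])) for r in csv.DictReader(f)]
conn.executemany("INSERT INTO observations VALUES (?, ?)", rows)
conn.commit()

for row in conn.execute("SELECT station, avg(value) FROM observations GROUP BY station"):
    print(row)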
Microsoft got it right with Azure Data Studio.
There's also pandasql[1] which is based on SQLite. We use this behind the scenes at SeekWell[2] to power cross database / cross source joins.
0 - https://pandas.pydata.org/pandas-docs/stable/reference/api/p...
1 - https://pypi.org/project/pandasql/
2 - https://seekwell.io/
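In case anyone hasn't seen it, pandasql lets you run SQLite SQL directly against in-memory DataFrames, which is what makes the cross-source join trick possible; a small sketch (DataFrame contents are invented):

```python
import pandas as pd
from pandasql import sqldf  # pip install pandasql

# two "sources" that never touch the same database
orders = pd.DataFrame({"user_id": [1, 2, 1], "total": [9.5, 20.0, 3.25]})
users = pd.DataFrame({"user_id": [1, 2], "name": ["ann", "bob"]})

# pandasql loads the frames into a temporary SQLite db and runs the SQL there
query = """
    SELECT u.name, sum(o.total) AS spent
    FROM orders o JOIN users u USING (user_id)
    GROUP BY u.name
"""
print(sqldf(query, locals()))
```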
Was adding basic online chart capabilities to our SQLite publishing website a few weeks ago (eg: https://dbhub.io/vis/justinclift/Marine%20Litter%20Survey%20... )
But if people could do Jupyter notebooks and visualise them like this too, that could be useful.
Hmmm, should probably set up some kind of survey on our website to ask... :)