Can you say a few more words about the library https://github.com/standardebooks/tools ? Can it generate ePub3 from Markdown files, or do I have to feed it HTML already? Any repo with usage examples of the `--white-label` option would be nice.
Scrolling is no longer interesting, and food looks unappetizing. Making the digital reality look boring is a good trade-off if it makes the real world look more exciting.
Thanks to comments from @jtbaker and @SkyPuncher, I just added a shortcut to the "pull out" menu, so I can now turn it off when I need to work with pictures where colors are important.
The website has all the notebooks from the book, as well as the complete tutorials on the tech stack (Python, Pandas, Seaborn).
For everyone interested, check out the extended preview PDFs:
- Part 1: DATA and PROBABILITY https://minireference.com/static/excerpts/noBSstats_part1_pr...
- Part 2: STATISTICAL INFERENCE https://minireference.com/static/excerpts/noBSstats_part2_pr...
Doesn't just `(Y @ X)[None]` work? Using `None` to add an extra dimension works in practice, but I don't know if you're "supposed" to do that.
(Y @ X)[None]
# array([[14, 32, 50]])
but `(Y @ X)[None].T` works as you described:
(Y @ X)[None].T
# array([[14],
# [32],
# [50]])
I don't know either, re: supposed to or not, though I do know `np.newaxis` is an alias for `None`.
It's because in Python 1-dimensional arrays are actually a thing, unlike in Matlab. That line of code is a non-example; it is easier to make it work in Python than in Matlab.
To make `Z` a column vector, we would need something like `Z = (Y @ X)[:,np.newaxis]`.
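To make the shapes concrete, here is a minimal sketch; the concrete values of `Y` and `X` are my assumption, picked only so that `Y @ X` reproduces the `[14, 32, 50]` output quoted above:
import numpy as np
Y = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
X = np.array([1, 2, 3])
Z = Y @ X
Z.shape
# (3,)   <- a genuine 1-D array, neither row nor column
Z[:, np.newaxis]
# array([[14],
#        [32],
#        [50]])
Z[:, np.newaxis].shape
# (3, 1)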
Although, I'm not sure why the author is using `concatenate` when the more idiomatic function would be `stack`; with `stack`, the change you suggest works and is pretty clean:
Z = Y @ X
np.stack([Z, Z], axis=1)
# array([[14, 14],
# [32, 32],
# [50, 50]])
with the convention that vectors are shape (3,) instead of (3,1).
> # or rely on broadcasting rules carefully.
> Z = Y @ X.reshape(3, 1)
Why not use X.transpose()?
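For what it's worth, a minimal check (assuming `X` is the 1-D, length-3 vector from the thread) of why the reshape gets suggested instead of a transpose: transposing a 1-D array is a no-op in NumPy, so it never produces a column.
X = np.array([1, 2, 3])   # assumed 1-D vector
X.transpose().shape
# (3,)   <- unchanged; .T and .transpose() do nothing to a 1-D array
X.reshape(3, 1).shape
# (3, 1)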
This seems to work,
Z = Y @ X[:,np.newaxis]
though it is arguably more complicated than calling the `.reshape(3, 1)` method.
The first time I tried ChatGPT, that was the thing that surprised me most: the way it understood my queries.
I think the spotlight is on the "generative" side of this technology, and we're not giving the query understanding the credit it deserves. I'm also not sure we're fully taking advantage of this functionality.
I've tried several times to understand the "multi-head attention" mechanism that powers this understanding, but I'm yet to build a deep intuition.
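For reference, the core computation is compact even if the intuition is elusive; here is a minimal NumPy sketch of single-head scaled dot-product attention on toy random data (my own illustration, with no learned projections and no claim to match any particular model's code):
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))   # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) for a single head
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # how well each query matches each key
    weights = softmax(scores, axis=-1)        # rows sum to 1: attention distribution per token
    return weights @ V                        # mix the value vectors accordingly

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
attention(Q, K, V).shape
# (4, 8)
Multi-head attention runs several of these in parallel on different learned linear projections of the same input and concatenates the results.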
Are there any research or expository papers that talk about this "understanding" aspect specifically? How could we measure understanding without generation? Are there benchmarks out there specifically designed to test deep/nuanced understanding skills?
Any pointers or recommended reading would be much appreciated.