mattj commented on Waymo One is now open to all in Los Angeles   waymo.com/blog/2024/11/wa... · Posted by u/ra7
doctorpangloss · 9 months ago
> Right now it's the most exciting tourist attraction in San Francisco.

I'm as excited as you about self-driving cars, but the vehicles are essentially driven remotely, maybe not by live video feed but nonetheless by remote operators, in the sense that matters. It is more tourist attraction than be-all and end-all of technology, which was very deflating to find out.

All I am really saying, Simon, is that you are a highly educated guy: adopt a more nuanced take on this. According to the people I know who work in this space and are not trying to raise money for autonomous vehicles, and according to many journalists, there is broad consensus that Waymo has adopted a remote driving scheme that is working, and perhaps that is why they are operating a taxi service and others are not. It isn't clear if that growth story, as great as it is, will help them raise enough money to invent truly autonomous vehicles.

mattj · 9 months ago
mattj commented on DIY Espresso (2020)   fourbardesign.com/2020/10... · Posted by u/timvdalen
fodkodrasz · 2 years ago
The site definitely needs a rework based on input from someone totally unfamiliar with the project.

I cannot tell from the opening page what coffee machines the mod is for, but I can see that an STM32 or an Arduino Nano and some other parts would be needed.

It is not really informative: there are no pictures on the opening page to show a before/after, to showcase the benefits, etc.

Yet there is a rickroll...

None of this makes the project very appealing, but yeah, I can see the begging icon (asking for support to pay for a professional technical writer)... which is also really backwards in my opinion. First you should sell the project to me, then ask for a donation.

While the engineering contents might be great, the presentation is very low quality, to put it politely.

mattj · 2 years ago
I've built one of these based on the instructions and use it daily (and love it!), but the tone of the site is pretty reflective of the project overall. It's definitely a really impressive hack and I appreciate all the hard work that's gone into it, but it could really use a little more user-facing empathy.

I'm not sure if I'd recommend it to someone else - and if I were doing it again I'd probably spend a few hundred more (than the Gaggia + parts cost) and just buy an off-the-shelf machine with the same feature set.

mattj commented on Nearly 12M Square Feet of Vacant Office Space in S.F   socketsite.com/archives/2... · Posted by u/kyleblarson
meddlepal · 5 years ago
I feel like comments like this often lack perspective. Upstate NY, especially Buffalo and Rochester, has plenty of things to do and far more than five restaurants.

It strikes a nerve with me because in Massachusetts a similar sentiment is expressed by Boston-Cambridge folks about Lowell, Worcester, and other periphery cities. It's simply not true that these places are devoid of culture and cuisine, and in the age of Yelp and Google Maps it takes almost no time to find it.

mattj · 5 years ago
I think the grandparent is probably referring to upstate in the Hudson Valley sense of the phrase. Plenty of cute towns, but Hudson / Woodstock / Kingston definitely have O(tens) of great restaurant options, comparable to a slice of any single Manhattan / Brooklyn neighborhood most tech people live in.
mattj commented on Attention Is All You Need   papers.nips.cc/paper/7181... · Posted by u/espeed
RangerScience · 8 years ago
I'm seconding this. I could not find a good resource to understand what "attention" actually _is_.

(The next step for me would be to follow the citation trail to the original paper, but that might not be the best place to come to an understanding of the thing.)

mattj · 8 years ago
The other answers cover the math well, but I think the “why do you need attention?” statement is worth making (and answers the more engineering-y question of “how/when?”):

DNNs typically operate on fixed-size tensors (often with a variable batch size, which you can safely ignore). To incorporate a non-fixed-size input, you need some way of converting it into a fixed size - for example, processing a sentence of variable length into a single prediction value. You have many choices for combining the tensors from each token in the sentence: max, min, mean, median, sum, etc. Attention is a weighted mean, where the weights are computed from a query and keys and then applied to values. The query might represent something you know about the sentence or the context (“this is a sentence from a toaster review”), the key represents something you know about each token (“this is the word embedding tensor”), and the value is the tensor you want to use for the weighted mean.
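To make that concrete, here is a toy numpy sketch of attention as a weighted mean (my own illustration, not code from the paper; the names, shapes, and random data are all made up):

    import numpy as np

    rng = np.random.default_rng(0)
    n_tokens, d = 5, 8                       # variable-length sentence, fixed embedding size
    tokens = rng.normal(size=(n_tokens, d))  # one embedding per token

    query = rng.normal(size=d)  # what we're looking for (sentence/context info)
    keys = tokens               # what each token offers
    values = tokens             # what actually gets averaged

    # Weights come from query-key similarity; softmax makes them sum to 1.
    scores = keys @ query / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()

    pooled = weights @ values  # fixed-size (d,) vector, whatever n_tokens was

However long the sentence, `pooled` has the same shape, which is exactly the fixed-size guarantee the surrounding network needs.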

mattj commented on Show HN: Kozmos – A Personal Library   getkozmos.com/... · Posted by u/_fwu1
mattj · 8 years ago
This is great - the like / heart button is really slick, and I love how it doesn't get in the way at all. I've used pinboard and others in the past, and the (relatively) heavier bookmarking flow would often stop me from saving things as I didn't want to break my flow.

Excited to see where this ends up!

mattj commented on Most Winning A/B Test Results Are Illusory [pdf]   qubit.com/sites/default/f... · Posted by u/maverick_iceman
ted_dunning · 9 years ago
This is yet another article that ignores the fact that there is a MUCH better approach to this problem.

Thompson sampling avoids the problems of multiple testing, power, early stopping and so on by starting with a proper Bayesian approach. The idea is that the question we want to answer is more "Which alternative is nearly as good as the best with pretty high probability?". This is very different from the question being answered by a classical test of significance. Moreover, it would be good if we could answer the question partially by decreasing the number of times we sample options that are clearly worse than the best. What we want to solve is the multi-armed bandit problem, not the retrospective analysis of experimental results problem.

The really good news is that Thompson sampling is both much simpler than hypothesis testing and can be applied in far more complex situations. It is known to be an asymptotically optimal solution to the multi-armed bandit problem and often takes only a few lines of very simple code to implement.
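For example, a bare-bones Bernoulli bandit with Beta(1, 1) priors might look like this (my illustration - the variant rates are invented, and this is not code from the linked repo):

    import numpy as np

    rng = np.random.default_rng(0)
    true_rates = [0.04, 0.05, 0.07]       # hidden conversion rates of 3 variants
    successes = np.ones(len(true_rates))  # Beta posterior alpha per arm
    failures = np.ones(len(true_rates))   # Beta posterior beta per arm

    for _ in range(10_000):
        samples = rng.beta(successes, failures)  # one plausible rate per arm
        arm = int(np.argmax(samples))            # show the arm whose draw wins
        reward = rng.binomial(1, true_rates[arm])
        successes[arm] += reward
        failures[arm] += 1 - reward

    print(successes + failures - 2)  # pulls per arm; clearly-worse arms fade out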

See http://tdunning.blogspot.com/2012/02/bayesian-bandits.html for an essay and see https://github.com/tdunning/bandit-ranking for an example applied to ranking.

mattj · 9 years ago
I agree with you (and love your blog, btw), but I think you're skipping over at least a few benefits you can get out of a mature, well-built a/b framework that are hard to build into a bandit approach. The biggest one I've found personally useful is days-in analysis; for example, quantifying the impact of a signup-time experiment on one-week retention. This doesn't really apply to learning ranking functions or other transactional (short feedback loop) optimizations.

That being said, building a "proper" a/b harness is really hard and will be a constant source of bugs / FUD around decision-making (don't believe me? try running an a/a experiment and see how many false positives you get). I've personally built a dead-simple bandit system when starting greenfield and would recommend the same to anyone else.
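The a/a claim is easy to verify with a quick simulation (my sketch, assuming a 10% baseline conversion rate and a plain two-sample t-test):

    import numpy as np
    from scipy import stats

    # Both "variants" draw from the same distribution, so every
    # significant result is a false positive.
    rng = np.random.default_rng(0)
    n_experiments, n_users = 1_000, 2_000

    false_positives = 0
    for _ in range(n_experiments):
        a = rng.binomial(1, 0.10, n_users)
        b = rng.binomial(1, 0.10, n_users)
        _, p = stats.ttest_ind(a, b)
        false_positives += p < 0.05

    print(false_positives / n_experiments)  # ~0.05 with one honest look

Even with a single fixed-horizon analysis you get roughly 5% false "winners" at p < 0.05; peeking at interim results pushes that much higher.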

mattj commented on Wide and Deep Learning: Better Together with TensorFlow   research.googleblog.com/2... · Posted by u/hurrycane
ninjin · 9 years ago
I am about to catch a flight, so I am unable to do anything better than skim the post and paper. But isn't this just good old feature embeddings coupled with learned features, which have been around for several years now?
mattj · 9 years ago
I think the change here is that they're learning the embeddings alongside the feature weights (i.e. they're part of the same loss function).
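A rough sketch of what that looks like in Keras (my own illustration, not Google's code; the vocabulary size, layer widths, and input names are invented):

    import tensorflow as tf

    # Wide part: a linear model over (pre-)crossed sparse features.
    wide_in = tf.keras.Input(shape=(1000,), name="crossed_features")
    wide_logit = tf.keras.layers.Dense(1, use_bias=False)(wide_in)

    # Deep part: a learned embedding fed through a small MLP.
    deep_in = tf.keras.Input(shape=(1,), dtype="int32", name="item_id")
    emb = tf.keras.layers.Flatten()(
        tf.keras.layers.Embedding(input_dim=50_000, output_dim=32)(deep_in))
    deep_logit = tf.keras.layers.Dense(1)(
        tf.keras.layers.Dense(64, activation="relu")(emb))

    # Summing the logits ties both parts to one output and one loss,
    # so the embeddings and the linear weights are trained jointly.
    logit = tf.keras.layers.Add()([wide_logit, deep_logit])
    out = tf.keras.layers.Activation("sigmoid")(logit)

    model = tf.keras.Model([wide_in, deep_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy")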
mattj commented on Twilio S-1   sec.gov/Archives/edgar/da... · Posted by u/kressaty
stanmancan · 9 years ago
This is the first time I've ever looked through an S-1 before, but in the Risks section they say:

    We have a history of losses and we are uncertain about our future profitability
Is it normal to go public when being uncertain if you'll ever be profitable?

mattj · 9 years ago
This kind of language is very standard. The risks section pretty much always contains obvious platitudes ("An earthquake might destroy all our computers," "All our employees may quit").
mattj commented on Ask HN: How do I get better at CSS?    · Posted by u/Catalyst4NaN
TheLem · 10 years ago
In my opinion it goes like this:

- Make a sketch of the design, form positioning, or interface you want for a certain page.

- Translate this design into a form your interpreter can work with (I mean writing the CSS).

- Go through a long cycle of trial and error: reading Stack Overflow, testing, reading snippets.

Practice this for a while (be patient!) and you will find yourself a world-class CSS "writer". The core skill is moving from a sketch to CSS.

mattj · 10 years ago
Similar experience, but I focused on finding UI elements I liked in native apps or websites and attempted to clone them without looking at the source, then played around with the result to figure out how I could simplify it, how it behaved cross-browser, etc.
mattj commented on Introducing Progressive Equity – Increase employee ownership as company grows   blog.detour.com/introduci... · Posted by u/andrew
cyrusradfar · 10 years ago
Ah, great minds think alike. We were doing the same thing at the same time. I backtested the process against Facebook's IPO so it could feel a bit more real:

http://kapuno.com/conversation/bblc6nqbe6qte

mattj · 10 years ago
Heads up, your math is a little buggy - 1% of $90B is $900M, not $90M.

u/mattj

Karma: 357 · Cake day: April 16, 2008

About: doing ml stuff since 2008