Readit News
zgao commented on Launch HN: Blyss (YC W23) – Homomorphic encryption as a service    · Posted by u/blintz
zgao · 2 years ago
Hey, tangentially- I am CEO of Fabric, a company building orders of magnitude faster hardware accelerators for next-gen cryptography on the latest fab technologies.

Would love to share notes if you're up for it!

zgao commented on Launch HN: Neptyne (YC W23) – A programmable spreadsheet that runs Python    · Posted by u/dosinga
zgao · 3 years ago
Founder of AlphaSheets here -- we built this back in 2015 and developed it for 3 years. We built Python, R, and SQL support plus full Excel formula/hotkey/formatting/conditional-formatting/ribbon compatibility. It was a long slog!

I wish you good luck and all the best. It's a tough field but a big market. And I still think the potential is there.

zgao commented on Peter Thiel's favorite thinker: Rene Girard   memod.com/jashdholani/pet... · Posted by u/patrickjmc
zgao · 4 years ago
I always wondered what special genius people saw in Girard.

The observation that people want what others want is not new and doesn't require philosophical genius -- "keeping up with the Joneses" is what normal people call it.

What about "avoiding competition is good, so chase blue oceans"? That's certainly a decent fund thesis, sure. Is it a genius one? The returns seem to come from the application of the maxim, not the maxim itself.

The real insight would have been to propose a way out of this "mimetic hell" for society. Girard's observation is that the way out usually takes violence against a scapegoat -- certainly not a fun one.

zgao commented on Chip design drastically reduces energy needed to compute with light   news.mit.edu/2019/ai-chip... · Posted by u/rbanffy
deepnotderp · 6 years ago
Fathom doesn't do computing optically.
zgao · 6 years ago
They started out doing it optically, then moved away from it. I heard about it in 2016, so I know.
zgao commented on Chip design drastically reduces energy needed to compute with light   news.mit.edu/2019/ai-chip... · Posted by u/rbanffy
thatcherc · 6 years ago
Does the paper propose building these devices in free space? I got the sense this was all intended to be produced lithographically with waveguides, so alignment wouldn't be a problem.
zgao · 6 years ago
The title is wrong about it being an integrated design. Here's an excerpt from the paper abstract:

"This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large (N ≳ 10^6) networks and can be operated at high (gigahertz) speeds and very low (subattojoule) energies per multiply and accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components"

zgao commented on Chip design drastically reduces energy needed to compute with light   news.mit.edu/2019/ai-chip... · Posted by u/rbanffy
zgao · 6 years ago
Caveat: the paper mainly focuses on the "standard quantum limit," the fundamental photon energy needed for the operations. Once other factors are taken into account (for example, modulation energy for the weights in this homodyne scheme, which scales with N² rather than N, or the limits of the ADC), the energies they propose are nowhere near achievable. Furthermore, free-space optical systems have substantial alignment and packaging problems, which will keep them from beating integrated approaches in the near term. In fact, judging by the latest wording on their website, Fathom Computing has potentially pivoted away from free space after trying to make it work for three years.
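To see why the N² modulation term matters, here's a toy per-MAC energy model. The numbers are purely illustrative assumptions, not figures from the paper: the detection energy amortizes away as N grows, but the per-weight modulation energy does not, so it sets a floor.

```python
# Toy model of energy per MAC in a homodyne optical matrix-vector multiply.
# Assumed (illustrative) numbers, NOT from the paper:
E_det = 1e-18   # 1 aJ per coherent detection (one per output element)
E_mod = 1e-15   # 1 fJ per weight modulation (one per matrix element)

def energy_per_mac(N):
    macs = N * N                  # an NxN matvec performs N^2 MACs
    detection = N * E_det         # N detections -> amortizes as 1/N
    modulation = N * N * E_mod    # N^2 modulations -> constant per MAC
    return (detection + modulation) / macs

for N in (64, 1024, 16384):
    print(N, energy_per_mac(N))   # converges to E_mod, not to zero
```

Under these assumptions the quantum-limited detection cost vanishes at scale, but energy per MAC bottoms out at the modulation cost -- which is why the subattojoule headline number is misleading.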

However, it still presents an interesting case for the fact that the fundamental floor on optical scaling is absolutely tiny. It'll be interesting to see who wins in this space :)

zgao commented on Perceptrons from memristors   arxiv.org/abs/1807.04912... · Posted by u/godelmachine
orbifold · 7 years ago
Can you please give a reference for the ~250fj MAC?
zgao · 7 years ago
You can go to Groq.com -- that startup claims 125 fJ per flop (and each MAC counts as two flops, thanks to marketing logic). It was started by 8 of the 10 founding TPU team members.
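The conversion from their marketing number to the ~250 fJ/MAC figure is just:

```python
# fJ per flop -> fJ per MAC, counting a MAC as two flops (multiply + add).
fj_per_flop = 125           # Groq's claimed figure
flops_per_mac = 2           # one multiply and one accumulate
fj_per_mac = fj_per_flop * flops_per_mac
print(fj_per_mac)           # 250 fJ per multiply-accumulate
```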
zgao commented on Perceptrons from memristors   arxiv.org/abs/1807.04912... · Posted by u/godelmachine
zgao · 7 years ago
I don't mean to be too negative, but this is hardly a new development, so can someone clarify the novelty in this paper? Neural nets have been extensively demonstrated in memristor-based architectures [1], and several memristor-based training architectures have previously been proposed and tested [2]. The abstract's claim that "no model for such a network has been proposed so far" is prima facie false.

In any case, I have yet to see a conclusive, publicly explained solution to the significant system-level problems with memristor-based neural architectures, or indeed any analog neural architecture. The best claimed digital architectures run around ~250 fJ per multiply-and-accumulate (MAC) [Groq], and these generally involve 8-bit multiplication, which is extremely expensive in the analog domain thanks to the exponential scaling of power with precision.

Even setting aside the monstrous fabrication and device-level variance issues with memristors, DAC and ADC consume tens of pJ per sample in the realistic IP blocks that are commercially available. Although only one pair of DAC and ADC operations is required per dot product, that is still 40 fJ per MAC from conversion alone, assuming a 256x256 matrix multiplication and ignoring other system-level issues. This limits memristors to roughly a 5x advantage over current digital architectures, and as nodes shrink, by the time memristors ship that will be closer to 3x.

While 3x is considerable, I don't think it justifies the moonshot-level deep-tech risk that memristors will continue to represent. Many hardware companies [Tabula...] have failed chasing something like a 3x in the main figure of merit, only to find that system-level issues leave them at 1x. Besides, I'm sure digital architectures have more than 3x room for improvement -- plenty of tricks left for digital!
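The conversion-overhead arithmetic can be sketched in a few lines. The ~10 pJ figure for one DAC+ADC pair is an assumption chosen to be consistent with the "tens of pJ per sample" and "40 fJ per MAC" figures above:

```python
# Back-of-envelope: DAC/ADC overhead amortized over one dot product.
pair_energy_pj = 10.0            # one DAC + ADC pair per dot product (assumed)
N = 256                          # dot-product length (256x256 matrix)

# Conversion energy spread over the N MACs in the dot product, in fJ:
conversion_fj_per_mac = pair_energy_pj * 1000 / N
print(round(conversion_fj_per_mac, 1))      # ~39 fJ/MAC from conversion alone

# Best case vs. the ~250 fJ/MAC digital baseline, even with free analog math:
digital_fj_per_mac = 250
print(round(digital_fj_per_mac / conversion_fj_per_mac, 1))  # ~6.4x ceiling
```

So even if the analog multiply itself were free, the data converters alone cap the advantage in the single digits, which is where the 5x (and shrinking) estimate comes from.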

I'm hoping for a breakthrough, because I am fundamentally an optimist, but memristors have been failing to deliver since 2008.

[1] http://www.cs.utah.edu/~rajeev/pubs/isca16-old.pdf [2] https://ieeexplore.ieee.org/document/7010034/

zgao commented on Deep image prior 'learns' on just one image   dmitryulyanov.github.io/d... · Posted by u/singularity2001
cs702 · 8 years ago
Thanks. I'm not mixing them up! I'm just wondering whether and to what degree architecture, i.e., network structure, will prove important for other, more advanced AI tasks, including up to AGI.
zgao · 8 years ago
Although probably not sufficient for AGI, network architecture is essentially guaranteed to matter: there is ample empirical evidence that architecture choices are important, and ample reason from numerics to believe they will remain so.

In the first category (empirical evidence),

- The discrete leap from non-LSTM RNN to LSTM network performance on NLP was essentially due to a "better factoring of the problem": breaking out the primitive operations that equate to an RNN having "memory" had a substantial effect on how well it "remembered."

- The leap in NMT from LSTM seq2seq to attention-based methods (the Transformer by Google) is another example. Long-distance correlations made yet another leap because they are simply modeled more directly by the architecture than in the LSTM.

- The relation network by DeepMind is another excellent example of a drop-in, "pure" architectural intuition-motivated replacement that increased accuracy from the 66% range to the 90% range on various tasks. Again, this was through directly modeling and weight-tying relation vectors through the architecture of the network.

- The capsule network for image recognition is yet another example. By shifting the focus of the architecture from guaranteeing only positional invariance to guaranteeing other sorts as well, the network was able to do much better at overlapping MNIST. Again, a better factoring of the problem.

These developments all illustrate that picking the architecture and the numerical guarantees baked into the "factoring" of the architecture (for example, weight tying, orthogonality, invariance, etc.) can have and has had a profound effect on performance. There is no reason to believe this trend won't continue.
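The "weight tying" point can be made concrete with a toy parameter count (numbers are illustrative): a convolution reuses one small kernel at every position, baking translation structure into the architecture instead of asking an unconstrained dense layer to learn it.

```python
import numpy as np

n = 100     # input length
k = 5       # kernel size

# An unconstrained dense layer mapping n inputs to n outputs:
dense_params = n * n            # 10,000 free weights

# A weight-tied 1-D convolution: the same k weights at every position.
conv_params = k                 # 5 free weights

x = np.random.randn(n)
w = np.random.randn(k)
y = np.convolve(x, w, mode="valid")   # output length n - k + 1 = 96
print(dense_params, conv_params, y.shape)
```

Same family of linear maps the dense layer could represent, but the architectural prior collapses 10,000 parameters to 5 -- a "factoring" baked into the network rather than learned.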

In fact, there are some very interesting ways to think about the principles behind network structure -- I can't say for sure that it has any predictive power yet, but types are one intuitively appealing way to look at it: http://colah.github.io/posts/2015-09-NN-Types-FP/

zgao commented on Finance Pros Say You’ll Have to Pry Excel Out of Their Cold, Dead Hands   wsj.com/articles/finance-... · Posted by u/triplee
d--b · 8 years ago
Finance dev guy here.

Excel's dominance in the field is because it is an _application container_ that _non_ dev people can use.

The workflow is this:

- old trader guy says to his junior guy: "hey can you look into xxx."

- junior trader guy says: "sure I'll make a spreadsheet for it"

- old trader guy: "great your model is all I need, let's trade"

- several weeks later, IT guy says: "hey you're running a $100m book out of a spreadsheet, we'll make you a nice system for it, cause your stuff will blow up."

- several months later the IT guy comes back with a web app that does the same thing as the spreadsheet.

- old trader guy says: "hey I can't copy shit around, my shortcuts aren't working, I need to be able to do basic maths on the side, I can't save my work, etc."

- IT guy: "ok I'll make you an export-to-Excel button"

Seriously I've seen this happen over and over again.

The issue is not how to get rid of Excel, it's how do we make a better spreadsheet...

zgao · 8 years ago
Exactly this.

Shameless plug: I am a founder of AlphaSheets, a company working on solving all of these issues. It's quite scary (building a spreadsheet is like boiling the ocean), but our mission feels very meaningful, we're well-funded, and we're now stable and serving real users.

A big problem in finance workflows is that there is a tradeoff between several factors: correctness, adoption / ease-of-use, rapid prototyping, and power. We aim to solve several of these major problems. We've built a real-time collaborative, browser-based spreadsheet from the ground up that supports Python, R, and SQL in addition to Excel expressions.

Correctness is substantially addressed, because you don't need to use VLOOKUP or mutative VBA macros anymore. Your data comes in live, and you can reference tables in Python as opposed to individual cells. A lot of operational risk goes away as well, because the AlphaSheets server is a single source of truth.
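As a sketch of what "referencing tables rather than cells" looks like, here is the classic VLOOKUP pattern expressed as a pandas join (the data is entirely hypothetical):

```python
import pandas as pd

# Hypothetical tables standing in for two spreadsheet ranges.
trades = pd.DataFrame({"ticker": ["AAPL", "MSFT", "AAPL"],
                       "qty":    [100, 50, 25]})
prices = pd.DataFrame({"ticker": ["AAPL", "MSFT"],
                       "price":  [150.0, 300.0]})

# Instead of a fragile per-cell VLOOKUP, join the whole table at once:
merged = trades.merge(prices, on="ticker", how="left")
merged["notional"] = merged["qty"] * merged["price"]
print(merged)
```

One named, auditable operation replaces a column of copy-pasted lookup formulas, which is where much of the correctness win comes from.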

We help with adoption of Python, and of correct systems generally. You can move to Python gradually in AlphaSheets -- many firms are attempting a "Python push" and haven't succeeded because the only alternative is moving to Jupyter, which is too much of a disruption. AlphaSheets is less brittle than Excel, and the important keyboard shortcuts are there.

And finally, the entire Python ecosystem of tools (pandas, numpy, etc.) and all of R is available, meaning that many pieces of functionality that had to be painstakingly built in-house in VBA and pasted around are simply available out of the box in well-maintained, battle-tested packages.

Our long-term plan is to broaden our focus to other situations in which organizations are outgrowing their spreadsheets. We think there's a lot of potential in the spreadsheet interface, but the Excel monopoly has prevented meaningful innovation. For example, every BI solution tries to be "self-serve" and "intuitive" these days, yet meets resistance from users who stick with spreadsheets for their infinite flexibility and immediate familiarity.

We hope to bring the spreadsheet in line with the realities of the requirements of the modern data world -- big data, tabular data, the necessity of data cleaning, data prep / ETL, the availability of advanced tooling (stats, ML), better charting -- because we think there's a giant market of people waiting to move to a modernized but familiar spreadsheet.

If there's anyone interested, contact me, because I'd be very interested in chatting! I'm michael at alphasheets dot com :)
