Readit News
conjectureproof commented on All Circuits are Busy Now: The 1990 AT&T Long Distance Network Collapse (1995)   users.csc.calpoly.edu/~jd... · Posted by u/hexbus
EGreg · 3 years ago
Should have used Q !
conjectureproof · 3 years ago
Why?

I briefly had an interest in learning Q, then looked at some code: https://github.com/KxSystems/cookbook/blob/master/start/buil...

Why not just build what you need with C/arrow/parquet?


conjectureproof commented on Ask HN: Do you have any experience asking to be down-leveled at $Corporate_Job?    · Posted by u/zwkrt
conjectureproof · 3 years ago
Could consider offering to be hired as a consultant.

Pete Muller said of his quants, "I want their shower time, because in the shower they are thinking about things that get them to solve the problems." Put another way, in certain roles more than 50% of the value is created in the 20% of office time spent crystallizing ideas developed outside the office.

If your team can capture that >50% of the value from 20% of the time for 20% of the comp, that's a huge win. This assumes your work is qualitatively different from a new hire's, not "more widgets with fewer defects in less time". If it's the latter, the company may reasonably prefer a new hire who can put in the 50-hour slog, and you will too; it's no fun working as a time-metered, widget-making FTE stand-in.

conjectureproof commented on Ask HN: What have you created that deserves a second chance on HN?    · Posted by u/paulgb
doersino · 3 years ago
A blog post on how to make your Bash history more useful: https://excessivelyadequate.com/posts/history.html
conjectureproof · 3 years ago
Neat.

Simple way to track time spent on projects that is resilient to user forgetfulness. Much better than collecting timestamps from git commits. Could be interesting to merge with git history and measure how productivity (some combination of bash activity, git activity, and lines-of-code/Kolmogorov complexity) changes with time of day, season, weather, etc.

# store timestamps

export HISTTIMEFORMAT="%F %T "

# append immediately (guard the existing PROMPT_COMMAND so an empty value
# doesn't produce a stray leading semicolon)

PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }history -a"

# do not limit .bash_history file size

export HISTFILESIZE=""

# append mode (collect from multiple shells)

shopt -s histappend

# multi-line commands as single record

shopt -s cmdhist
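With that config, .bash_history stores a `#<unix-epoch>` line before each command, which makes the merging idea above easy to prototype. A rough sketch (the function name and sample data are hypothetical, purely illustrative):

```python
from collections import Counter
from datetime import datetime

def activity_by_hour(history_text):
    """Count commands per hour of day from a timestamped
    .bash_history (lines of '#<epoch>' followed by the command)."""
    counts = Counter()
    for line in history_text.splitlines():
        if line.startswith("#") and line[1:].isdigit():
            ts = datetime.fromtimestamp(int(line[1:]))
            counts[ts.hour] += 1
    return counts

# two fake history entries, one command each
sample = "#1672531200\nls -la\n#1672538400\ngit status\n"
print(activity_by_hour(sample))
```

Joining these hour buckets against `git log --format=%at` timestamps would give the combined activity signal.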

conjectureproof commented on Ask HN: What are the foundational texts for learning about AI/ML/NN?    · Posted by u/mfrieswyk
conjectureproof · 3 years ago
+1 on Elements of Statistical Learning.

Here is how I used that book, starting with a solid foundation in linear algebra and calculus.

Learn statistics before moving on to more complex models (neural networks).

Start by learning OLS and logistic regression, cold. Cold means you can implement these models from scratch using only numpy ("I do not understand what I cannot build"). Then work on regularization (lasso, ridge, elastic net), where you will learn about the bias/variance tradeoff, cross-validation, and feature selection. These topics are explained well in ESL.
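As a sketch of what "from scratch using only numpy" can mean here (the function names and synthetic data are my own, not from ESL):

```python
import numpy as np

def ols_fit(X, y):
    """OLS coefficients via least squares; lstsq is more stable
    than explicitly inverting X'X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def logistic_fit(X, y, lr=0.1, n_iter=5000):
    """Logistic regression by plain gradient descent on the
    negative log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        beta -= lr * X.T @ (p - y) / len(y)   # gradient of the NLL
    return beta

# quick check on synthetic data with known coefficients
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
y_lin = X @ np.array([1.0, 2.0, -3.0]) + 0.1 * rng.normal(size=500)
print(ols_fit(X, y_lin))   # ~ [1, 2, -3]

y_log = (X @ np.array([0.5, 1.0, -1.0]) + rng.logistic(size=500) > 0)
print(logistic_fit(X, y_log.astype(float)))
```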

For OLS and logistic regression I found it helpful to strike a 50-50 balance between theory (derivations and problems) and practice (coding). For later topics (regularization etc.) I found it helpful to tilt toward practice (20/80).

If some part of ESL is unclear, consult the statsmodels source code and docs (top preference) or scikit-learn (second preference; I believe it has rather more boilerplate, "mixin" classes etc.). Approach the code with curiosity. Ask questions like "why do they use np.linalg.pinv instead of np.linalg.inv?"
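On that particular question, a small illustration (my own toy construction): pinv handles the rank-deficient case, where inverting X'X fails outright.

```python
import numpy as np

# X with a duplicated column: X'X is exactly singular, so inv()
# raises, while pinv() returns the minimum-norm least-squares solution.
rng = np.random.default_rng(1)
x = rng.normal(size=(100, 1))
X = np.hstack([x, x])                 # perfectly collinear columns
y = 3.0 * x[:, 0]

try:
    np.linalg.inv(X.T @ X)
except np.linalg.LinAlgError:
    print("X'X is singular; inv() raises")

beta = np.linalg.pinv(X) @ y          # minimum-norm solution
print(beta)                           # ~ [1.5, 1.5], weight split evenly
```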

Spend a day or five really understanding covariance matrices and the singular value decomposition (and therefore PCA which will give you a good foundation for other more complicated dimension reduction techniques).
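A quick sanity check (again a toy example of my own) tying those together: the eigendecomposition of the covariance matrix and the SVD of the centered data give the same principal variances.

```python
import numpy as np

rng = np.random.default_rng(2)
# correlated 2-D data
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.0], [1.2, 0.5]])
Xc = X - X.mean(axis=0)                      # always center first

# route 1: eigendecomposition of the sample covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
evals, evecs = np.linalg.eigh(cov)           # ascending eigenvalues

# route 2: SVD of the centered data itself
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_vars = s**2 / (len(Xc) - 1)              # squared singular values

# the two routes agree
print(np.allclose(np.sort(evals), np.sort(svd_vars)))
```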

With that foundation, the best way to learn about neural architectures is to code them from scratch. Start with simpler models and work from there. People much smarter than me have illustrated how that can go: https://gist.github.com/karpathy/d4dee566867f8291f086 and https://nlp.seas.harvard.edu/2018/04/03/attention.html

While not an AI expert, I feel this path has left me reasonably prepared to understand new developments in AI and to separate hype from reality (which was my principal objective). In certain cases I am even able to identify new developments that are useful in practical applications I actually encounter (mostly using better text embeddings).

Good luck. This is a really fun field to explore!

conjectureproof commented on Breaking RSA with a quantum computer?   schneier.com/blog/archive... · Posted by u/barathr
Strilanc · 3 years ago
I want to contrast this paper with Shor's factoring paper [1].

One of the things that stands out to me about Shor's paper is how meticulous he is. He is considering the various ways the algorithm might fail, and proving it doesn't fail in that way. For example, the algorithm starts by picking a random seed and you can show that some choices of seed simply don't work. He proves a lower bound on how many have to work. Also, given a working seed, sometimes the quantum sampling process can correctly return a useless result. He bounds how often that can occur as well. He never says "I think this problem is rare so it's probably fine", instead he says "this problem is at least this rare therefore it is fine". Essentially the only real problem not addressed by the paper was that it required arbitrarily good qubits... so he went and invented quantum error correction [2].

The paper being discussed here [3] does not strike me as meticulous. It strikes me as sloppy. They are getting good numbers by hoping potential problems are not problems. Instead of addressing the biggest potential showstoppers, they have throwaway sentences like "It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA".

How many shots are needed for each sample point fed into the classical optimization algorithm? How many steps does the optimization algorithm need? How do these scale as the problem size is increased? How big are they for the largest classically simulable size (RSA128 with 37 qubits according to their table)? These are absolutely critical questions!... and the paper doesn't satisfyingly address them.

Is there somewhere where I can bet money that this doesn't amount to anything?

1: https://arxiv.org/abs/quant-ph/9508027

2: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.R2...

3: https://arxiv.org/abs/2212.12372

conjectureproof · 3 years ago
Lenny Baum, Lloyd Welch, and their colleagues at IDA were using the EM algorithm for code cracking well before they were able to prove anything about its convergence.

EM worked in practice, so they spent a long time trying to prove convergence. Modern proofs are simpler.

Could be the case that this method also works in practice. I haven't the faintest idea whether it will.
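For intuition about "works in practice before convergence is proven", here is a minimal EM sketch (a toy two-Gaussian mixture of my own devising, not Baum and Welch's actual HMM setting):

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """EM for a mixture of two unit-variance Gaussians:
    alternate the E-step (responsibilities) and M-step (means, weight)."""
    mu = np.array([x.min(), x.max()])   # crude initialization
    w = 0.5                             # mixing weight of component 1
    for _ in range(n_iter):
        # E-step: posterior probability each point came from component 1
        d0 = np.exp(-0.5 * (x - mu[0]) ** 2) * (1 - w)
        d1 = np.exp(-0.5 * (x - mu[1]) ** 2) * w
        r = d1 / (d0 + d1)
        # M-step: re-estimate means and mixing weight
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                       np.sum(r * x) / np.sum(r)])
        w = r.mean()
    return mu, w

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
mu, w = em_two_gaussians(x)
print(sorted(mu))   # close to [-2, 2]
```

Each iteration provably increases the likelihood, but that monotonicity argument came well after the algorithm was already in use.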
