hmartiros commented on Ask HN: Who is hiring? (November 2018) · Posted by u/whoishiring
hmartiros · 7 years ago
Skydio | Software Engineer - Skills SDK | Redwood City, CA | ONSITE VISA | Full-time

Skydio makes the most advanced autonomous flying robots in the world today, and has an unprecedented opportunity to put autonomous UAVs to work at scale. Come design and build public-facing APIs that drive the growth and development of the Skydio Autonomy Platform, from early beta through product maturity and widespread adoption. You will define and lead the technical development of APIs for autonomous drones targeted at enterprise and consumer markets. Funding: $70M+.

Looking for: High proficiency in Python, JavaScript, and C++. Experience designing, building, and managing substantial software APIs. Strong communication skills and the ability to interface effectively with the internal engineering team and external partners. Experience with and understanding of full-stack software principles: security protocols, performance analysis, cloud compute resource management, and documentation. Strong analytical skills: vector math, 3D rotations, basic linear algebra, and calculus. Knowledge of and experience with robotics systems, specifically UAVs. Passion for flight.

IEEE on SDK: https://bit.ly/2JCpsIx

Coverage: https://www.skydio.com/press/

recruiting@skydio.com

hmartiros commented on Empiricism and the limits of gradient descent   togelius.blogspot.com/201... · Posted by u/togelius
vinn124 · 8 years ago
> Many losses which don't seem differentiable can be reformulated as such...

Agreed, especially with policy gradients.

> If the dimensionality is small, second-order methods (or approximations thereof) can do dramatically better yet.

I have not seen second-order derivatives in practice, presumably due to memory limitations. Can you point me to examples?

hmartiros · 8 years ago
They aren't common in deep learning, but if you look at estimation problems like odometry, optimal control, and calibration, the typical approach is to build a least-squares estimator that optimizes with a Gauss-Newton approximation to the Hessian, or with other quasi-Newton methods. Gradient descent exhibits comparatively slow convergence in these cases, especially when the condition number is large. In the case of an actual quadratic loss function, it can (by definition) be solved in one iteration if you have the Hessian. However, getting the Hessian efficiently within most learning frameworks is difficult, as they primarily compute only vector-Jacobian products (VJPs) or Hessian-vector products (HVPs).
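
To make the contrast concrete, here's a minimal Gauss-Newton sketch for nonlinear least squares in NumPy. The residual/Jacobian functions and the exponential-fit toy problem are made-up stand-ins for illustration, not any particular estimator from the comment above:

    # Minimal Gauss-Newton sketch for nonlinear least squares (illustrative
    # only; the residual/jacobian functions and toy data are hypothetical).
    import numpy as np

    def gauss_newton(residual, jacobian, x0, iters=10):
        # Minimizes 0.5 * ||residual(x)||^2. J^T J approximates the Hessian
        # of the cost, so each step solves the normal equations
        # (J^T J) dx = -J^T r.
        x = x0.astype(float).copy()
        for _ in range(iters):
            r = residual(x)    # residual vector, shape (m,)
            J = jacobian(x)    # Jacobian dr/dx, shape (m, n)
            dx = np.linalg.solve(J.T @ J, -J.T @ r)
            x += dx
        return x

    # Toy problem: fit y = a * exp(b * t) to data. Note that if the residual
    # were linear in x (a truly quadratic cost), one step solves it exactly.
    t = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * t)
    res = lambda p: p[0] * np.exp(p[1] * t) - y
    jac = lambda p: np.stack([np.exp(p[1] * t),
                              p[0] * t * np.exp(p[1] * t)], axis=1)
    print(gauss_newton(res, jac, np.array([1.0, 0.0])))  # ~ [2.0, -1.5]

On an ill-conditioned quadratic, gradient descent would need many iterations to crawl along the narrow valley, while the normal-equations solve above handles the curvature directly.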
hmartiros commented on Continuous Domain Game of Life in Python with Numpy   github.com/duckythescient... · Posted by u/mpweiher
hmartiros · 8 years ago
This is really neat, thanks for sharing.
hmartiros commented on Empiricism and the limits of gradient descent   togelius.blogspot.com/201... · Posted by u/togelius
hmartiros · 8 years ago
In my experience, if you have even a little smoothness in your problem's cost manifold, taking advantage of gradients is invaluable for sample efficiency. Many losses which don't seem differentiable can be reformulated as such - you can look around and see a wide array of algorithms being put into end-to-end learned frameworks. If the dimensionality is small, second-order methods (or approximations thereof) can do dramatically better still. However, I'm also a fan of evolutionary algorithms, and I see no reason why evolutionary rules can't be defined with awareness of gradient signals.
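
As a concrete example of reformulating a non-differentiable loss (in the spirit of the policy-gradient point from the other thread), here's a minimal score-function estimator sketch. The Gaussian sampling distribution and the step-function reward are toy choices of mine, not anything from the original discussion:

    # Score-function (REINFORCE-style) gradient sketch: estimates the
    # gradient of E[f(a)] w.r.t. the mean of a Gaussian sampling
    # distribution, even though f itself is a non-differentiable step
    # function. Toy example only.
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda a: np.where(np.abs(a - 3.0) < 0.5, 1.0, 0.0)  # black-box reward

    mu, sigma, lr = 0.0, 2.0, 0.5
    for _ in range(200):
        a = rng.normal(mu, sigma, size=1024)  # sample from the distribution
        score = (a - mu) / sigma**2           # d/dmu of log N(a; mu, sigma^2)
        mu += lr * np.mean(f(a) * score)      # ascend the estimated gradient
    print(mu)  # drifts toward the rewarded region around 3.0

Because the gradient flows through the log-probability of the sampling distribution rather than through f itself, any black-box objective becomes amenable to gradient-based optimization, at the cost of estimator variance.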

u/hmartiros

Karma: 12 · Cake day: May 28, 2018
About
autonomy @ skydio