Over time, this concept will find its way into many projects, especially open source projects that try to make the web a better place.
For example, as soon as browsers support a DNS based on cryptographic proofs like ENS, other technologies with URL-based identities (like ActivityPub) will automatically support cryptographic identities, which would bring them to projects like Mastodon and Bluesky automatically.
They tried the same with Bing and failed. Bing did not see any uptick even with the new AI features:
https://gs.statcounter.com/search-engine-market-share/deskto...
My takeaway is that users are not as stupid and lazy as Microsoft thinks they are. They want a clean UI, and if they have to sign up for ChatGPT to get it, they will. ChatGPT has already gained as many users as Bing has. With a clean UI, Bing might have already doubled its user base.
Why doesn't Microsoft just remove all the clutter from Bing and enjoy the benefit of having its search engine be the default on all the Windows tablets and laptops out there? Why waste it all and make every user change their search engine to Google?
Which is a map of all authors in the world sorted by overlap in readership. I found some of my favorite writers by browsing it.
I wonder which approach is better suited to find something that is spot on to my interests.
When I think of my favorite books, they usually are the most popular books of their authors.
Are there any counterexamples, where an author wrote a book that is more profound than their biggest hit but got overlooked for some reason?
https://levelup.gitconnected.com/python-performance-showdown...
I don't think the entire feature set of the Python runtime is involved in this.

    def find_close_polygons(
        polygon_subset: List[Polygon], point: np.array, max_dist: float
    ) -> List[Polygon]:
        close_polygons = []
        for poly in polygon_subset:
            if np.linalg.norm(poly.center - point) < max_dist:
                close_polygons.append(poly)
        return close_polygons
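For comparison, a vectorized rewrite along these lines replaces the N per-polygon `np.linalg.norm` calls with a single NumPy call. This is just a sketch, not the article's code; the `Polygon` class here is a minimal stand-in with only the `center` attribute the function actually uses:

```python
from typing import List

import numpy as np


class Polygon:
    # Minimal stand-in for the article's Polygon class;
    # only the `center` attribute matters for this function.
    def __init__(self, center):
        self.center = np.asarray(center, dtype=float)


def find_close_polygons_vectorized(
    polygon_subset: List[Polygon], point: np.ndarray, max_dist: float
) -> List[Polygon]:
    if not polygon_subset:
        return []
    # Stack every center into one (N, D) array and compute all distances
    # with a single vectorized call instead of N Python-level calls.
    centers = np.stack([p.center for p in polygon_subset])
    dists = np.linalg.norm(centers - point, axis=1)
    return [p for p, d in zip(polygon_subset, dists) if d < max_dist]
```

This keeps the same signature and results but moves the inner loop into NumPy's C code, which often closes much of the gap before reaching for Rust.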
And after replacing it with this Rust code, it is taking "an avg of 23.44ms per iteration":

    use pyo3::prelude::*;
    use ndarray_linalg::Norm;
    use numpy::PyReadonlyArray1;

    #[pyfunction]
    fn find_close_polygons(
        py: Python<'_>,
        polygons: Vec<PyObject>,
        point: PyReadonlyArray1<f64>,
        max_dist: f64,
    ) -> PyResult<Vec<PyObject>> {
        let mut close_polygons = vec![];
        let point = point.as_array();
        for poly in polygons {
            let center = poly
                .getattr(py, "center")?
                .extract::<PyReadonlyArray1<f64>>(py)?
                .as_array()
                .to_owned();

            if (center - point).norm() < max_dist {
                close_polygons.push(poly)
            }
        }

        Ok(close_polygons)
    }
Why is the Rust version 13x faster than the Python version?
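My guess (not measured in the article): each iteration of the Python loop pays for attribute lookup, a tiny temporary array, and a full `np.linalg.norm` dispatch at interpreter speed, while Rust does the same arithmetic with no per-call overhead. That per-call cost is easy to see by timing the same math done element by element versus in one vectorized call:

```python
import time

import numpy as np

point = np.zeros(2)
centers = np.random.rand(10_000, 2)

# Element by element: pays NumPy dispatch plus interpreter
# overhead 10,000 times, once per tiny 2-element array.
t0 = time.perf_counter()
dists_loop = np.array([np.linalg.norm(c - point) for c in centers])
loop_time = time.perf_counter() - t0

# One vectorized call: the same math, dispatched once.
t0 = time.perf_counter()
dists_vec = np.linalg.norm(centers - point, axis=1)
vec_time = time.perf_counter() - t0

assert np.allclose(dists_loop, dists_vec)
print(f"per-call loop: {loop_time * 1e3:.1f} ms, vectorized: {vec_time * 1e3:.1f} ms")
```

The two results are identical; only the number of interpreter-level calls differs, which is roughly the overhead the Rust version sidesteps.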
https://web.archive.org/web/20220906224326/https://twitter.c...