On a visibility graph you can run any search algorithm. The key thing is that this graph is sparse (the number of nodes is limited by the number and complexity of the obstacles, not by the distances).
This algorithm follows your outlined steps almost perfectly. In particular, it draws a line to the destination, and if it encounters an obstacle it "walks" paths (generates successors for A*) along the exterior of the obstacle until it can "see" (draw a straight line to) the destination or another obstacle. By using obstacle-free distance to the destination as the heuristic at each node, this will produce optimal paths.
You can, just from the Wikipedia page, deduce a good response to the problem of "select the optimal point for bypassing the obstacle". (Maybe try the vertices of the convex hull / AABB, and don't worry: it may take a few iterations to truly wrap around an obstacle.)
This will perform much, much better in sparse environments than a grid, because a grid has to iterate O(n) for n grid cells, while a visibility graph has to iterate O(n) for n obstacles (of a given complexity).
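A minimal sketch of the idea, assuming polygonal obstacles given as vertex lists. This is illustrative, not a production visibility graph; for example, it does not stop a segment between two vertices of the same polygon from cutting through its interior (a real implementation also checks a point on the segment against the polygon).

```python
import heapq
import math

def segments_cross(p, q, a, b):
    """True if segment p-q properly crosses segment a-b (shared endpoints don't count)."""
    def orient(o, x, y):
        return (x[0]-o[0])*(y[1]-o[1]) - (x[1]-o[1])*(y[0]-o[0])
    d1, d2 = orient(a, b, p), orient(a, b, q)
    d3, d4 = orient(p, q, a), orient(p, q, b)
    return d1*d2 < 0 and d3*d4 < 0

def visible(p, q, obstacles):
    """A straight move p -> q is allowed if it crosses no obstacle edge."""
    for poly in obstacles:
        n = len(poly)
        for i in range(n):
            if segments_cross(p, q, poly[i], poly[(i+1) % n]):
                return False
    return True

def shortest_path(start, goal, obstacles):
    """A* on the visibility graph: nodes are start, goal and obstacle corners."""
    nodes = [start, goal] + [v for poly in obstacles for v in poly]
    dist = lambda a, b: math.hypot(a[0]-b[0], a[1]-b[1])
    # f = g + h, with the obstacle-free distance to the goal as heuristic
    open_heap = [(dist(start, goal), 0.0, start, [start])]
    best = {}
    while open_heap:
        f, g, cur, path = heapq.heappop(open_heap)
        if cur == goal:
            return path
        if best.get(cur, float("inf")) <= g:
            continue  # already expanded with a cheaper g
        best[cur] = g
        for nxt in nodes:
            if nxt != cur and visible(cur, nxt, obstacles):
                g2 = g + dist(cur, nxt)
                heapq.heappush(open_heap, (g2 + dist(nxt, goal), g2, nxt, path + [nxt]))
    return None
```

Note how few nodes there are: two per obstacle corner plus the endpoints, regardless of how large the map is.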
Ah, I did hear about the Visibility Graph through AI.
I didn’t fully understand it due to my lack of knowledge.
But with you bringing it up again, I think I should look into that as well.
Thank you for your kind response.
Is there a nice way to handle the idea that a creature might have a better or worse memory or ability to build a model of the environment? That seems like it would be an interesting dimension to add to a creature.
Sure, if you held a deadline to my head and told me to do it, I'd just include obstacles that are "within their memory".
Expire them by time, refresh them by range (can see as they move). Constantly replan and I bet you'd get something reasonable looking, but _only_ if you add an obstacle that represents only what the creature can see. If you add the whole obstacle (regardless of what it can see), it'll just do the optimal thing, which is fine but may not "look right". So clip visibility with obstacle, add that, and union it with all known obstacles, then replan. Keep it fast by doing a union "in place" with known obstacles so your obstacle list doesn't grow unbounded.
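A minimal sketch of that memory model. All names here are invented, and "fragments" stand for whatever clipped obstacle pieces your representation uses (polylines, cells, etc.):

```python
import time

class ObstacleMemory:
    """Remembers obstacle fragments a creature has seen; forgets old ones."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.seen = {}  # fragment -> time it was last seen

    def observe(self, fragments, now=None):
        # `fragments` are the visible, clipped pieces of obstacles this tick
        now = time.monotonic() if now is None else now
        for frag in fragments:
            self.seen[frag] = now  # refresh by range: union "in place" by key

    def known(self, now=None):
        now = time.monotonic() if now is None else now
        # expire by time, so the obstacle list doesn't grow unbounded
        self.seen = {f: t for f, t in self.seen.items() if now - t <= self.ttl}
        return list(self.seen)
```

Each tick the creature calls `observe()` with what it can currently see, then replans against `known()`.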
You can imagine it would walk toward the goal until it sees a wall, then it would go either left or right for a step, then back for two steps, then left for 4 steps, then back for 8 ... because the A* "frontier" keeps expanding so it keeps searching along that frontier.
And if you're lucky, you just discovered the optimal-search variant of the "drunkard's walk" problem.
The core optimization you're trying sounds fairly similar to JPS (Jump Point Search), which I believe is fantastic on open maps but eh on dense ones. [0]
Maybe take a look at HPA* (hierarchical A*: partition the map, pathfind on the high-level map, then pathfind at lower granularity).
You can also encode into the hierarchy information about whether a rabbit exists in the chunk in the first place, to reduce the initial search for nearest-rabbit.
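A toy sketch of the two-level idea (the chunk size and all helpers are invented here, not Factorio's or Rimworld's actual code):

```python
from collections import deque

CHUNK = 4  # chunk side length; `passable` is the set of walkable cells

def chunk_of(cell):
    return (cell[0] // CHUNK, cell[1] // CHUNK)

def build_chunk_graph(passable):
    """Coarse graph: two chunks are linked if any passable cells touch
    across their shared border."""
    links = set()
    for (x, y) in passable:
        for n in ((x + 1, y), (x, y + 1)):
            if n in passable and chunk_of((x, y)) != chunk_of(n):
                links.add(frozenset((chunk_of((x, y)), chunk_of(n))))
    return links

def chunk_path(links, start_chunk, goal_chunk):
    """BFS on the coarse graph; the result restricts the fine-grained A*
    to cells inside those chunks. (Linear scan over links: fine for a toy.)"""
    frontier, prev = deque([start_chunk]), {start_chunk: None}
    while frontier:
        c = frontier.popleft()
        if c == goal_chunk:
            path = []
            while c is not None:
                path.append(c)
                c = prev[c]
            return path[::-1]
        for link in links:
            if c in link:
                (other,) = link - {c}
                if other not in prev:
                    prev[other] = c
                    frontier.append(other)
    return None
```

The same chunk index can carry a "rabbit present" bit per chunk, so a wolf only descends into chunks whose flag is set.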
Factorio had a good blog post on it [1], and so did Rimworld, whose version also enabled arbitrary-sized partitions. [2]
I'm kind of just guessing based on your basic description though; what's the full scenario in mind?
First, I understand that the JPS (Jump Point Search) family only works efficiently in static environments.
This means it requires preprocessing to achieve high efficiency.
What I'm aiming for, however, is a real-time scenario where such preprocessing is not strictly necessary. I want to implement something akin to how many creatures with human-like vision navigate using only partial information, just as humans do in real-life situations.
Currently, trees are appearing and disappearing dynamically.
In the future, such situations are expected to occur more frequently, so I am aiming to create a lightweight solution that can handle these changes in real-time.
As you mentioned, for cases like rabbits, their location information is already preprocessed and divided by zones.
Since this is a small task, it doesn't impose a significant burden on the system.
This information is intended to be used when wolves are searching for rabbits.
Additionally, I have recently been considering processing even smaller zones than the current ones to handle the vision of wolves more effectively.
It seems like this would do OK for open areas with a few simple obstructions. I'm not sure how it would do if there were complex obstructions. For instance, it's unclear how it would work if you were trying to pathfind through a maze.
It would do just fine, but that's because OP is essentially grasping the intuition behind visibility graph search, which is (or was) a very common path planning algorithm using just obstacle boundaries.
The algorithm in a maze would just use "intersections" as successors, rather than "next step down the hall".
This is, in fact, an optimal search algorithm for this problem, and scales much better than any grid search in this case.
@johnfn
The approach I’m considering would be similar to how humans navigate: using only visible information to continuously infer and find the way in real-time.
It would essentially break down the flow of how humans navigate into small, incremental steps.
Look with their eyes, make a judgment, move, and repeat the process again and again...
Of course, trial and error could also occur.
This might actually make it feel more natural.
I was thinking this too. A* is a good general-performance algorithm; it's possible the poster found an algorithm that performs better on their use case but doesn't generalise as well as A*. Custom pathfinding algorithms that take advantage of domain knowledge are pretty common!
The difficulty has always been in how many things you look at. Tiles or objects. With static maps you can quite easily structure things so that you don't have to look at very much at all. When things are dynamic tests such as 'look at the thing in the way' suddenly get harder because you have to determine for any given point A) is something in the way? and B) what is it?
The simplest and most memory hungry thing would be to have a tile map where every tile has a precalculated map on the direction to go to reach every other tile. If you used that as a basis for optimisation you can rapidly improve, for instance an easy first step is to not store any direction data for areas where the path in a straight line is optimal.
Optimize more and you can reduce the scale of data and compute required to calculate the map; eventually you can calculate the map quicker, and often only for the start and end points you care about. Follow those optimisations far enough and you end up with many of the algorithms used today.
If you have a lot of things moving in a dense area, often the optimal is to maintain a bitmap of 'Something is here' making presence detection trivial.
If you then do 8 passes over that bitmap, one for each direction you can go (for square tiles), you can create 8 further maps, each recording a count of when that pass last saw an obstruction. That lets you implement a very fast jump point search: instead of scanning tile by tile, you can jump straight to the next obstruction, in order to either find the way around or find the dead end.
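One of those per-direction passes might look like this sketch (the west-scan is shown; the other seven directions are symmetric):

```python
def obstruction_distance_west(blocked, width, height):
    """For each tile, how many tiles you can jump west before hitting
    something. Blocked tiles themselves get 0."""
    dist = [[0] * width for _ in range(height)]
    for y in range(height):
        run = 0  # free tiles since the last obstruction in this row
        for x in range(width):  # sweep west -> east
            if (x, y) in blocked:
                run = 0
                dist[y][x] = 0
            else:
                dist[y][x] = run
                run += 1
    return dist
```

A pathfinder standing on a tile can then jump `dist[y][x]` tiles west in one step instead of scanning tile by tile.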
I think the algorithm mentioned in the article is essentially a jump point search, but without going into how the "select the optimal point for bypassing the obstacle" step is done. Again this comes back to static/dynamic and how to determine that point. If using object outlines, can objects scale or rotate? You could probably do a reasonable version of jump point with convex hulls around objects and simple fast-out distance functions. A single-object system would have difficulty when multiple objects have overlapping bounds, though. Two S-shaped objects whose bounds overlap might still have a path through them, but the optimal passing point of each individual shape alone is likely not on the path between them.
So the ideal is different for static/dynamic, dense/sparse, or memory availability, but all of those factors come back to how they influence how you decide what to look at and how to ignore irrelevant information.
Simon Peyton Jones on "Getting from A to B: fast route finding on slow computers" [PWL London]
https://www.youtube.com/watch?v=L1XDdy-hOH8
It goes from 0 all the way up to A*, then beyond. I think the new stuff is based on https://www.cs.tau.ac.il/~haimk/papers/sp-wea.pdf but I'm not sure since the paper isn't explicitly named in the video.
I haven’t watched the video yet, but I really like the title: “on slow computers.”
I'll give you feedback again after I watch it.
The paper you mentioned also seems to have a lot I can learn from.
- Run A* on the optimistic pathfinding graph (no obstacles on the unknown cells).
- Follow the path found (if there's none, that's it)
- As you follow the path, sense the world and take note of new obstacles.
- If there's an action you can't make because of a new obstacle, re-run A*.
There are ways of re-using the state from previous A* runs (as long as the goal is still the same) that become handy on complex maps where the path you wanted to follow is blocked deep along the way.
BTW, if you tie-break nodes on lower heuristic value (h-value), then you'll be more likely to search deeper paths, which makes optimizations like trying to follow a straight line kind of useless, but as always, run benchmarks before guessing.
Also, if you have a tight deadline for the search algorithm, like having to make another game tick happen, there are real-time variants of A* that bound how many nodes A* can expand on each run.
I don't think you'll need real-time A* though. I remember this approach using Dijkstra was fast enough for a project back in Uni on now-15-year-old hardware, so newer hardware using A* should be good enough even for large graphs.
As far as I’ve researched, if there’s an assumption that there are no obstacles, the fastest way to select a straight path is Bresenham's Line Algorithm.
If I’m mistaken about this, please let me know!
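For reference, this is the standard integer-only form of Bresenham's line algorithm. Walking these cells in order and stopping at the first blocked one gives the "draw a line until it encounters an obstacle" step:

```python
def bresenham(x0, y0, x1, y1):
    """Yield every grid cell on the line from (x0, y0) to (x1, y1),
    inclusive, using only integer arithmetic."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy  # accumulated error term
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells
```
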
In my project, since I don’t need to guarantee complete real-time processing, there isn’t an absolute necessity to find paths as quickly as possible.
However, since many entities need to find paths simultaneously, I’d like to keep the computations as minimal as possible.
It might be similar to what you mentioned about algorithms being fast enough on low-spec hardware.
In my case, I’m currently using an ultra-low-power Mini PC with an *N100 CPU* as a server.
This choice not only helps me study methods to optimize performance but also satisfies my curiosity about fully leveraging the advantages of *MSA (Microservice Architecture)*-based services.
> Among the outline information, select the optimal point for bypassing the obstacle. (This part is the core)
Core … and a bit too vague. I'd be curious what happens to it if you run into a concave object. Imagine running into the back curve of a crescent, or, since diagonals are not legal moves,
    XXXXX
    XXX X
        X
        X
S - - > X E
        X
        X
    XXX X
    XXXXX
Once you're inside such a shape, following the outline is not the optimal way around it (you'd waste time in the little alcoves at the top & bottom, and I could make those alcoves considerably more pathological, too). You'd want to head for one of the opening's corners.
Of course, the optimal path S → E avoids walking into that entirely.
Since it seems to be a game, though, the other consideration is "should the entity use optimal pathfinding?" Confounding an opponent with an odd shape could be just called "gameplay". (Different opponents might even have different levels of intelligence, and thus, different pathfinding. Some 2D games I have played have this exact mechanic.)
Yes, as you mentioned, the idea of "it doesn’t have to be the optimal path" aligns perfectly with my thinking as well.
In the case of the algorithm I’m currently working on, it wouldn’t enter those concave areas directly. Instead, it would "look" at the obstacle first, recognize that the path is blocked, and then proceed toward one of the corners at the bottom or top of the concave shape.
Afterward, it should be able to exit again using the same approach.
However, to make it behave more like a creature with vision in certain situations, it might be good to enhance the algorithm slightly so that it can preemptively recognize "Ah, this is a concave obstacle." That way, it could avoid inefficient behavior while still maintaining its "realistic" navigation style.
You're referring to the "convex hull". And if you are inside a shape, drawing edges to the visible vertices of the shape (until you're on the boundary of the convex hull) will easily get you a path out and, bonus, will eventually draw a perfect shortest path to the end.
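Computing that hull is cheap for small outlines; a standard sketch (Andrew's monotone chain):

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in CCW order,
    starting from the lowest-leftmost point. Collinear points are dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:                 # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):       # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints are shared, drop duplicates
```

Pathfinding against the hulls rather than the raw outlines also sidesteps the alcove problem in the crescent example above.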
Yes I have; I did something similar for a project navigating in a space game.
This is still technically A* if you squint. The straight line is like using a Euclidean heuristic. The "optimal point" is just an abstracted way of navigating around obstacles.
The main thing you lose is that your path is no longer guaranteed to be optimal or guaranteed to be found if a solution exists. This was a problem I encountered, but I was trying to pathfind in a dynamic environment.
If your environment is static, you're better off just doing a pre-processing step where you divide your world into chunks of terrain. Maybe by using a flood fill algorithm and breaking off chunks when they reach the size of 100 tiles. Then you can maintain a graph that tells you if you can traverse from one chunk to another.
Your pathfinding over large distances would consist of an A* search on the graph of pre-computed chunks, and another A* search from your current chunk->next chunk.
I feel this is not needed if you tie-break nodes in Open to favour lower h-values. This leads to a node selection bias for deeper paths, which are more likely closer to the goal.
If you look at runs, this tie breaking makes A* behave like a greedy algorithm in the absence of obstacles and simply follow a straight path if there's one, and act sort of cleverly when there's a small detour.
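That tie-break is just a wider priority key; a minimal sketch:

```python
import heapq

def push(open_heap, g, h, node):
    """Order by f = g + h, break ties on smaller h: among equally good
    candidates, prefer the deeper node that looks closer to the goal."""
    heapq.heappush(open_heap, (g + h, h, node))

def pop(open_heap):
    return heapq.heappop(open_heap)[2]
```

With equal f-values, the node with the smaller h comes off the heap first, which is what makes A* hug the straight line on open ground.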
Yes, as you mentioned, the fact that "it’s not guaranteed to always find a solution" is perfectly fine for me.
That’s because it feels more natural.
Moreover, since my goal isn’t to always find an answer in the shortest time, this approach aligns even better with my intentions.
In my case, I’d like to handle aspects like "trial and error" as part of the learning concept for entities such as rabbits or wolves.
And of course, I’m aiming for something that works well in *dynamic situations*, not just static ones.
Oh! This seems like something even AIs haven’t suggested before.
The fact that it attempts paths in real-time without preprocessing is what I like the most! I definitely need to research this further!
I’ll definitely take a look at it. Thank you!
https://en.wikipedia.org/wiki/Visibility_graph
Very nice!
[0] http://www.gameaipro.com/GameAIPro2/GameAIPro2_Chapter14_JPS...
[1] https://factorio.com/blog/post/fff-317
[2] https://www.youtube.com/watch?v=RMBQn_sg7DA
In the end, I feel like it might have to transition into the realm of inference, much like AI that mimics human reasoning.
Anyway, the details are not enough for me to fully grasp the algo described.
See: https://news.ycombinator.com/item?id=42608107#42626311