Aug 09 2010


Although I previously said that matching the current angle of travel to a nearby road with a similar angle worked just fine for matching the current location to a road, that’s actually incorrect. When the vehicle turns a corner, the angle of travel does not match either of the roads involved. For example, when approaching a “T” intersection whose roads run North/South and East/West, the vehicle’s angle of travel while turning is somewhere between North/South and East/West. So the angle of travel matches neither of the roads to which one would hope to snap the location.

Match GPS Location to Road Data (Attempt 2)

Perhaps one thing that went horribly wrong with the previous attempt was putting more weight (importance) on the current angle of travel than on the distance to a road segment. Assuming the GPS data isn’t off by too much, the distance to the nearby road segments should be the most important factor; i.e., initially there should be a choice between three or fewer road segments based on distance.

Ideally, a path of travel would be saved and matched to nearby road segments, so that if the vehicle travels North then turns West, we could match to road segments which travel North and then West. Unfortunately I don’t have any ideas on how to implement this. It seems difficult because the GPS data gives you a lot more points to work with than the map data, to the point where the GPS data seems continuous compared to the discrete map data.

So next I will snap the GPS location to the nearest road segment and test the results. From there I will try to come up with a way to solve the issue mentioned in an earlier post, where the “snapped-to” road’s angle of travel does not match the current angle of travel at all. Currently I have two ideas: 1) don’t allow the road to change until the angle more closely matches the new road, and 2) detect when the vehicle is near an intersection and (similar to #1) allow the road to change only when the angle of movement matches the new road more closely than the current road. For both 1 and 2, if the intersection looks like a “T” then 45 degrees would be the threshold for changing the road that’s currently “snapped-to.”
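As a sketch of the distance-based snapping described above (all names here are my own hypothetical ones, and road segments are simplified to 2D line segments in {ax, ay, bx, by} form):

```java
// Sketch of snapping a GPS fix to the nearest road segment. The real map
// data would come from the TIGER files mentioned below; here a segment is
// just {ax, ay, bx, by}.
public class RoadSnap {

    // Project point (px, py) onto segment (ax, ay)-(bx, by), clamped to
    // the segment's endpoints; returns {x, y} of the closest point.
    static double[] closestPointOnSegment(double px, double py,
                                          double ax, double ay,
                                          double bx, double by) {
        double dx = bx - ax, dy = by - ay;
        double len2 = dx * dx + dy * dy;
        if (len2 == 0) return new double[] { ax, ay }; // degenerate segment
        double t = ((px - ax) * dx + (py - ay) * dy) / len2;
        t = Math.max(0, Math.min(1, t)); // clamp onto the segment
        return new double[] { ax + t * dx, ay + t * dy };
    }

    // Snap to whichever candidate segment is closest to the GPS fix.
    static double[] snap(double px, double py, double[][] segments) {
        double best = Double.MAX_VALUE;
        double[] bestPoint = null;
        for (double[] s : segments) {
            double[] c = closestPointOnSegment(px, py, s[0], s[1], s[2], s[3]);
            double d2 = (c[0] - px) * (c[0] - px) + (c[1] - py) * (c[1] - py);
            if (d2 < best) { best = d2; bestPoint = c; }
        }
        return bestPoint;
    }
}
```

Per the point above about distance mattering most, only the two or three nearest candidate segments would need to be considered before any angle test is applied.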

Two Weeks Ago

Two weeks ago I worked out various bugs in the methods which remove non-visible lines and perform rotations and transformations. The fact that a “Graphics2D” instance applies operations lazily rather than immediately, and the fact that a PerspectiveTransform can’t be applied through a Graphics2D instance, forced me to rewrite much of the code so that all of the rotations, transformations, etc. could be applied in the correct order.

Last Week

Last week I augmented the simulation to include not only a GPS location with map data, but also the image captured at the moment the GPS data was captured. Now I can test the various algorithms without driving around. From this I noticed that the “snap-to” algorithm works only about 10% of the time and needs to be improved.

 Posted by at 7:34 pm
Jul 12 2010

Noisy Map Data

The data actually obtained from the GPS device while driving does not match the freely available TIGER map data. This can’t be solved accurately by assuming the closest road point is the current position. Imagine a road like this:

[Figure: two intersecting roads, one running directly North/South and the other directly East/West, with the GPS track offset slightly to the right of the North/South road.]

The above lines are meant to represent two intersecting roads – one going directly North/South and the other directly East/West. If the map or GPS data is off slightly, such that the GPS shows the current position slightly to the right of the actual road, then there is a problem. If the vehicle is traveling North from the bottom up and the current position is to the right of the road by some amount, say by two underscores (“__”) as shown in the image, then at some point during the upward travel the closest road point will no longer be on the North/South road, even though the driver is still traveling North. Worse, the “current location” marker would jump from the North/South road to the East/West road when this occurred, which would be not only wrong but visually jarring.

To clean up the noisy data effectively, the angle of travel should be taken into account. The current and previous angles of travel can be obtained by saving the vehicle’s previous GPS locations, which gives a list of angles. The map data itself is just a set of points, so the angle of each road section can be computed as well. These two sets of angle data can then be brought together: the “current location” should snap to the nearest road section whose angle is similar to the current angle of travel.
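A minimal sketch of the angle comparison, assuming headings are computed from successive GPS fixes and that road angles are undirected (a road running North/South is the same road whether driven North or South); all names are hypothetical:

```java
public class AngleMatch {

    // Heading in degrees (0 = East, counter-clockwise) from two
    // consecutive GPS fixes.
    static double heading(double x1, double y1, double x2, double y2) {
        return Math.toDegrees(Math.atan2(y2 - y1, x2 - x1));
    }

    // Smallest difference between two undirected road angles: roads have
    // no inherent direction of travel, so 10 and 190 degrees are the
    // same road angle.
    static double angleDiff(double a, double b) {
        double d = Math.abs(a - b) % 180.0;
        return Math.min(d, 180.0 - d);
    }

    // True if the current heading is close enough to a road section's
    // angle for the section to be a snap candidate.
    static boolean matches(double travelHeading, double sectionAngle,
                           double thresholdDeg) {
        return angleDiff(travelHeading, sectionAngle) <= thresholdDeg;
    }
}
```

With a 45-degree threshold, a heading of due North would match a North/South section but reject an East/West one, which lines up with the “T” intersection threshold mentioned in the later post.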

Issues Etc With Matching Angles

Although I haven’t tried this yet, I imagine it will work quite well. On a road with a long curve there will be many road sections to choose from, and the algorithm should choose the section with the closest matching angle. When the closest road section has a non-matching angle, the algorithm will disregard that incorrect section, since the angle doesn’t match.

The only remaining issue I can think of is the following. If 1) the GPS location is off, say to the right by some amount, and 2) the driver then turns right onto a road, then snapping to the nearest location on the road will cause a jump from where the two roads meet to a point a bit to the right on the new road. This could be avoided by detecting when a change of road happens and somehow adjusting the current position displayed on the map to be very near the intersection, but I imagine this won’t be necessary for the limited application in my thesis.

Note: I implemented this this week, only considering the current angle of travel, and it seems to work well.

Floating Road with Edge Detection

I realized this week that it is possible to improve my algorithm to get rid of the floating road pieces. Currently it marks road everywhere, then goes back through the image and “searches up” to look for spaces, and then goes through the image horizontally to get rid of road pieces which are too thin. Instead, it could assume that road pieces below a certain level are “ok,” begin searching horizontally across the bottom of the image, and then include further road pieces only if they touch road pieces already in the image. This approach might be slower since it has to “grow” the road outward from the seed pieces, but it might also be faster because it processes the road points once instead of three times.
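The “grow from the bottom” pass could be sketched as a flood fill over the boolean road mask produced by the edge-density step (hypothetical names; the mask itself is assumed to already exist):

```java
import java.util.ArrayDeque;

public class RoadGrow {

    // Given a mask of pixels classified as road, keep only the pixels
    // connected (4-neighborhood) to road found along the bottom row,
    // discarding "floating" road pieces higher in the image.
    static boolean[][] growFromBottom(boolean[][] road) {
        int h = road.length, w = road[0].length;
        boolean[][] kept = new boolean[h][w];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        for (int x = 0; x < w; x++) {          // seed along the bottom row
            if (road[h - 1][x]) {
                kept[h - 1][x] = true;
                queue.add(new int[] { h - 1, x });
            }
        }
        int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            for (int[] d : dirs) {
                int y = p[0] + d[0], x = p[1] + d[1];
                if (y >= 0 && y < h && x >= 0 && x < w
                        && road[y][x] && !kept[y][x]) {
                    kept[y][x] = true;          // touches existing road
                    queue.add(new int[] { y, x });
                }
            }
        }
        return kept;
    }
}
```

Each road pixel is visited at most once, which is where the “once instead of three times” saving would come from.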

Additionally, in some cases it’s not necessary to examine every single pixel when searching for the road, since examining “one pixel” really means looking at a square matrix of pixels for its edge density (i.e., “the road” has a certain width). Once the algorithm has found one road pixel, it can jump ahead to the farthest pixel that would still yield a new section of road touching the current section. Ordering the search in this manner improves speed because none of the pixels between two touching road sections need to be searched, since the road sections have a certain width. The remaining issue for the new algorithm is what to do when the next searched pixel is not recognized as road.

Assume the road width (aka matrix width) is “roadWidth” pixels. The algorithm should begin the search at x where x is (roadWidth / 2). If this is recognized as road then the next pixel to search would be (currentX + roadWidth); call this pixel newX. If this second pixel (newX) is not recognized as road then we still must search all of the pixels between currentX and newX, beginning at newX and moving toward currentX, to try to expand the area recognized as road. … This would be complicated to implement efficiently.
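A sketch of that strided search over a single row, with a hypothetical isRoad[] array standing in for the edge-density test; the back-tracking branch is the part that gets complicated:

```java
import java.util.ArrayList;
import java.util.List;

public class StridedScan {

    // Scan one image row for road: probe every roadWidth-th pixel, and
    // when a probe misses, walk back toward the last hit to find where
    // the road actually ends. isRoad[x] stands in for the edge-density
    // test applied to the square matrix of pixels around x.
    static List<Integer> scanRow(boolean[] isRoad, int roadWidth) {
        List<Integer> hits = new ArrayList<>();
        int x = roadWidth / 2;                 // start half a road-width in
        int lastHit = -1;
        while (x < isRoad.length) {
            if (isRoad[x]) {
                hits.add(x);
                lastHit = x;
            } else {
                // Probe missed: search back toward the last hit so the
                // detected road area can still be expanded.
                int back = x - 1;
                while (back > lastHit && !isRoad[back]) back--;
                if (back > lastHit) hits.add(back);
                lastHit = x;                   // don't re-search this gap
            }
            x += roadWidth;                    // stride to the next probe
        }
        return hits;
    }
}
```

With roadWidth = 4 and road covering pixels 0–9 of a 16-pixel row, the probes land on 2 and 6, the probe at 10 misses, and the back-track recovers pixel 9 as the edge of the road.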