If you go to their cloud service page (https://cloud.internalpositioning.com/), it's trivially easy to get live location data for strangers by guessing names.
Yikes, you are right. Just typed in "Jones" and it loaded up a dashboard with a bunch of data. Fortunately it appears to be someone playing around with the dashboard. If you zoom out on the map, all the GPS markers make a smiley face :)
Interesting project and seems they've put a lot of work into it. I wish, however, that projects like this wouldn't use words to describe their performance ("High-precision") that are only meaningful given a user's intended application(s). For example, whether using a bluetooth RSSI technique for localization is considered "high-precision" depends completely on the user's intended application and the other components present within the system, including hardware and the environment (both of which could be uncontrollable).
Incidentally, the name of the project does a good job at capturing exactly what it is, "Framework for Internal Navigation and Discovery".
I think we're going to see a lot more innovation in indoor "GPS" when we get access to Apple's U1 chip, which uses ultra-wideband to get positioning down to 10 cm.
Estimote have an indoor location SDK that uses bluetooth beacons which is remarkably accurate (like ~15-20cm position accuracy in some of our testing).
Interestingly, its room mapping tool is iOS-only, which seems odd given FIND's approach of being Android-only because of OS restrictions on 802.x access.
Estimote is likely part of Apple's iBeacon program and has a license to build the technology into its beacons; FIND likely isn't, and therefore can't support it. If Estimote is in the program, it may be legally unable to ship an Android version of the room mapping functionality, lest it be removed from the program.
The U1 chip is definitely really cool and a promising step in improving the accuracy of indoor localization. Apple giving devs access to it will likely drive some innovation, but there still have to be other devices/beacons/motes that can communicate with the U1 chip to make it usable outside of the Apple ecosystem. Fortunately, as mentioned in the 9to5mac article, the U1 adheres to the UWB standard, which should help with interoperability should Apple choose to allow for that on its devices.
I say "supposedly" adheres to the standard, since it is quite common for companies to make their "compatible" version have proprietary features so as to lock in developers. There is also a $39 eval kit:
Note that high-quality (10 cm in this case) indoor location tracking means new privacy problems. Your phone can now know which room it is in (especially if you supply a home/office map), what part of that room it is in, what table it is placed on, what similar devices are close by and how they are arranged, and it has more info to figure out who is carrying it, etc. Similarly, other devices that participate in this location system can potentially know the precise location of your phone and other devices. Combined with GPS this can give quite a complete picture of your entire life: where you are at all times, who you interact with, what institutions you visit and participate in, what products you use and when you use them, what you pay attention to, and more.

It makes some magical things possible too: bring two devices together and they interact, sit in a chair and the computer boots.

Four DW1000 chips placed at the four corners of a typical house or apartment are needed for indoor location (with just a phone and a tile on your keys you can only get distance), so perhaps Apple will introduce an infrastructure kit to set up mapping your home. Something similar for larger buildings/offices/industrial settings also seems likely (or available from a third party).
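For anyone wondering why it takes several anchors: a single ranging exchange only constrains you to a sphere around the other device, i.e. a distance. A rough sketch with invented coordinates (plain least-squares multilateration, nothing specific to Apple's or Decawave's actual solvers):

    # Sketch: solving a tag position from UWB-style ranges to 4 known anchors.
    # Anchor layout, tag position and noise level are all made up.
    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0, 2.5],   # known anchor positions, metres
                        [8.0, 0.0, 2.5],
                        [8.0, 6.0, 2.5],
                        [0.0, 6.0, 2.5]])
    true_tag = np.array([3.0, 2.0, 1.0])

    rng = np.random.default_rng(0)
    ranges = np.linalg.norm(anchors - true_tag, axis=1) + rng.normal(0, 0.05, 4)

    def residuals(p):
        # measured minus predicted distance to each anchor
        return np.linalg.norm(anchors - p, axis=1) - ranges

    est = least_squares(residuals, x0=np.array([4.0, 3.0, 1.5])).x
    print("estimated tag position:", est)   # close to [3, 2, 1]

    # With one anchor you only have one entry of `ranges`: a sphere of
    # possible positions, i.e. just a distance, not a location.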
I've never investigated the data transfer capability of the Decawave system; does anyone understand what its limitations are? Would a video link built with it drop out if someone walked into the line-of-sight path? 6.8 Mbps probably isn't enough to send 4K video, but it might be OK for one 1080p stream or a few SD-resolution streams, depending on the codec.
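Back-of-the-envelope, with ballpark codec bitrates (not measurements of the Decawave hardware):

    # Rough sanity check of what fits in a 6.8 Mbps UWB link.
    # Stream bitrates are typical H.264 ballpark figures, nothing measured.
    LINK_MBPS = 6.8
    streams = {"SD (480p)": 1.5, "HD (1080p)": 5.0, "4K (2160p)": 20.0}
    for name, mbps in streams.items():
        n = int(LINK_MBPS // mbps)
        print(f"{name} at ~{mbps} Mbps: {n} stream(s) fit")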
That $300 kit is the second version of the dev kit.
Yes, line of sight is very important for this. You want all of the anchors (the fixed, non-moving devices) up as high as you can get them, and you need their positions to be known with precision, or the location of the tags will be off by a large amount.
There are also limitations with UWB that must be understood very well and designed around in order for a successful deployment of any size.
There is no way I can fit everything I learned about this system/technology into an HN comment.
I recommend getting the $300 kit and experimenting extensively to find out if the technology is suitable for your environment.
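If it helps, here is a quick sketch of why anchor-position accuracy matters so much; the geometry is invented and it is just plain least-squares multilateration, not anything Decawave-specific:

    # How errors in the surveyed anchor positions propagate to the tag fix.
    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0, 0, 2.5], [8, 0, 2.5], [8, 6, 2.5], [0, 6, 2.5]], float)
    tag = np.array([3.0, 2.0, 1.0])
    ranges = np.linalg.norm(anchors - tag, axis=1)        # ideal range measurements

    def solve(assumed_anchors):
        res = lambda p: np.linalg.norm(assumed_anchors - p, axis=1) - ranges
        return least_squares(res, x0=np.array([4.0, 3.0, 1.5])).x

    rng = np.random.default_rng(1)
    for sigma_cm in (1, 5, 20):                           # survey error per axis
        errs = [np.linalg.norm(solve(anchors + rng.normal(0, sigma_cm / 100,
                                                          anchors.shape)) - tag)
                for _ in range(200)]
        print(f"{sigma_cm} cm anchor error -> mean tag error ~{np.mean(errs)*100:.0f} cm")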
Damn, I was really hoping this was a turnkey solution for SLAM (simultaneous localization and mapping), as I really want to make a silly little rover that knows its location and can drive around my apartment.
I can't find it now (so I'm going to describe it and hope HN remembers), but something went by here a few months ago: a drone flying down a staircase with a camera on it, doing some sort of SLAM that way, mapping the staircase while tracking the drone's location.
- - - -
edit: Found it!
XIVO: X Inertial-aided Visual Odometry and Sparse Mapping
For a true turn-key solution you probably need the entire stack, from sensors up to software. Intel's RealSense cameras (the T265 specifically) are probably the most reasonably priced option for a hobbyist, with reasonable APIs. Visual and visual-inertial SLAM are not at a point where open-source packages are robust and easy to use with an arbitrary sensor stack and environment.
If you only have 2D motion and can spend more money or mess around with unsupported hardware (a salvaged Neato vacuum), a laser rangefinder is also a decent option.
Hardly turn-key, I'm afraid, but I'd look into UcoSLAM ( http://www.uco.es/investiga/grupos/ava/node/62 ). As I understand it, if you have a robot with a camera that can see special markers, it can work out its location.
If you have an extra smartphone, you can use some of the AR SDKs out there. ARKit has the ability to relocalize itself to a stored map (e.g. your apartment).
The 6D.ai SDK can even do meshing for obstacle avoidance.
Can you describe what kinds of accuracy are possible in various situations? What data sources are currently supported other than WiFi and Bluetooth (magnetic fields are mentioned)? Is mapping of the WiFi/BT/Magnetic field required? (And how often is remapping typically required? If I move my WiFi emitting laptop to a new location does that require remapping the entire WiFi space?)
> Can you describe what kinds of accuracy are possible in various situations?
Accuracy depends highly on the environment. If you're talking about houses/apartments, it's generally room level or sub-room level (~10 sq ft). It depends on the number of WiFi routers in the vicinity, but this number is always growing, so accuracy can get pretty good in some areas. Of course, accuracy goes way down if you are in a remote area with no Bluetooth or WiFi points in the vicinity.
> What data sources are currently supported other than WiFi and Bluetooth (magnetic fields are mentioned)?
The FIND system accepts any data source that can be quantified. In the API you just label your data and the system will try classifying with it. For my purposes I developed an app and a CLI tool that can geolocate phones/computers; these use Bluetooth and WiFi.
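Concretely, a learning submission looks roughly like this (treat the endpoint and field names as a sketch and check the FIND3 API docs for the exact format):

    # Sketch of posting a labeled fingerprint; field names are approximate,
    # see the FIND3 API docs for the authoritative format.
    import time
    import requests

    payload = {
        "d": "my-phone",               # device name
        "f": "my-family",              # family (your group/namespace)
        "t": int(time.time() * 1000),  # timestamp in milliseconds
        "l": "kitchen",                # location label: send it while learning,
                                       # leave it out when you want a classification
        "s": {                         # any quantifiable source can go here
            "wifi":      {"aa:bb:cc:dd:ee:ff": -45, "11:22:33:44:55:66": -71},
            "bluetooth": {"77:88:99:aa:bb:cc": -60},
        },
    }
    requests.post("https://cloud.internalpositioning.com/data", json=payload)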
> Is mapping of the WiFi/BT/Magnetic field required?
You don't need to map the fields, but you do need to go through a learning phase to help the system learn what kind of fingerprint each area has.
> And how often is remapping typically required?
It depends on your location. If you are in a rural area with only one WiFi router and it gets moved, then you need to remap. Surprisingly, I've used this in airports (which have tons of ad-hoc WiFi networks) and the accuracy stays resilient to the ad-hoc networks coming and going.
> If I move my WiFi emitting laptop to a new location does that require remapping the entire WiFi space?
It depends again: if your laptop is the only source of WiFi, then yes, you will need to remap. However, if your laptop is one of 10 or 20 sources, then it probably won't matter for the accuracy. The system has controls for this (visualization of patterns, test validations, etc.) so that you can monitor how this affects your setup.
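As a toy illustration of why one moved AP among many barely matters (made-up RSSI values, simple nearest-neighbour matching rather than the classifiers FIND actually uses):

    # Fingerprint matching stays stable when 1 of 5 APs changes its RSSI.
    fingerprints = {
        "kitchen": {"ap1": -40, "ap2": -65, "ap3": -70, "ap4": -80, "ap5": -55},
        "bedroom": {"ap1": -75, "ap2": -45, "ap3": -60, "ap4": -50, "ap5": -85},
        "office":  {"ap1": -60, "ap2": -80, "ap3": -42, "ap4": -70, "ap5": -65},
    }

    def classify(scan):
        def dist(fp):
            shared = set(fp) & set(scan)
            return sum((fp[k] - scan[k]) ** 2 for k in shared) / max(len(shared), 1)
        return min(fingerprints, key=lambda room: dist(fingerprints[room]))

    scan = {"ap1": -42, "ap2": -66, "ap3": -72, "ap4": -78, "ap5": -57}
    print(classify(scan))                  # kitchen

    moved = dict(scan, ap4=-55)            # the "laptop" AP moved closer
    print(classify(moved))                 # still kitchen; 4 of 5 APs unchanged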
Thanks. I moved DO droplets and deleted the DNS for those, but I guess they hung around long enough to get muddied. I re-enabled the DNS so those domains should point back to my domain.
It'd be nice if this had iOS support. My mother lacks directional hearing, and always struggles to find her iPhone despite the ability to make it ping with her Apple watch. An app that just says what area of the house it is in would help a ton.
If her spatial hearing isn't completely lost, changing the ringtone to pulsing white noise (one second on, one second off) may help, as it's generally harder to locate higher-pitched tones (like most ringtones).
I'm confused about the iOS restriction: there are a number of libraries/packages (all commercial, except for Google Maps) that work on iOS for indoor GPS. Is there maybe a different approach that the other platforms are using?
On Android you can get near-raw GPS data (phase of signals, Doppler shift, pseudoranges, etc.).
That can give a lot of extra information. For example, standing on one side of a room might mean a metal beam is reflecting signals from one GPS satellite, giving a longer apparent pseudorange. That info can be used to identify that you must be in that location.
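As a rough sketch of the idea (all numbers invented; on Android the raw measurements come from the GnssMeasurement API):

    # A consistently inflated pseudorange for one satellite can act as a
    # location signature. Ranges are invented; receiver clock bias assumed removed.
    expected = {"G05": 21_450_312.0, "G12": 23_104_887.0}  # geometric range, metres
    measured = {"G05": 21_450_330.0, "G12": 23_104_889.0}  # pseudorange, metres

    residuals = {sat: measured[sat] - expected[sat] for sat in expected}
    print(residuals)   # {'G05': 18.0, 'G12': 2.0}

    # ~18 m of extra path on G05 but not G12 is consistent with a reflection
    # off the metal beam, i.e. you are standing on that side of the room.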