Might A Robot Utilize Google ARCore?
Machine vision is a big field, because there are a lot of useful things a computer can do once it understands what it sees. In narrow, machine-friendly niches it has become commonplace: the UPC bar code on everyday merchandise was created specifically for machines to read, and a bar code reader is a very simple, very specific slice of machine vision.
But that is a long, long way from a robot understanding its environment through cameras, and many of the stops along that path are entire topics in their own right. Again we have successes in narrow, machine-friendly domains such as a factory floor set up for automation. Outside of environments tailored for machines, it gets progressively harder. Roomba and similar robot home vacuums like Neato can wander through a human home, but their success depends on that home being neat, tidy, and spacious. As a home becomes more cluttered, the success rate of robot vacuums declines.
But they're still using specialized sensors rather than a camera with vision comparable to human sight. Computers have no problem chugging through a 2D array of pixel data; extracting useful information from it is the hard part. Recent breakthroughs in deep learning algorithms have opened up new frontiers. The typical example is a classifier, and one of the demos that shipped with the Google AIY Vision kit is exactly that. (Though not the default demo, which was the "Joy Detector.") With a classifier the computer can say "that's a cat," which is a useful step toward what a robot actually needs, something more like "there's a house pet in my path, I need to maneuver around it, and I need to be aware it might get up and move." (This is a very advanced level of thinking for a robot...)
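The AIY Vision kit's demos run as Python scripts on the kit's Vision Bonnet, so the sketch below is not that code. It is only a rough illustration of the classifier idea, written against the TensorFlow Lite Java interpreter to stay in the same Android/Java world that ARCore lives in; the CatSpotter class, the model file name, and the label file name are all placeholders.

```java
import android.content.Context;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.support.common.FileUtil;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;

public class CatSpotter {
    private final Interpreter interpreter;
    private final List<String> labels;

    public CatSpotter(Context context) throws IOException {
        // Placeholder asset names for a bundled MobileNet-style float model
        // and its matching label list.
        interpreter = new Interpreter(FileUtil.loadMappedFile(context, "mobilenet_v1.tflite"));
        labels = FileUtil.loadLabels(context, "labels.txt");
    }

    // inputImage: a camera frame already resized and converted to the float
    // layout the model expects (that preprocessing is elided here).
    public String classify(ByteBuffer inputImage) {
        float[][] scores = new float[1][labels.size()];
        interpreter.run(inputImage, scores);

        // Return the label with the highest score, e.g. "tabby cat".
        int best = 0;
        for (int i = 1; i < labels.size(); i++) {
            if (scores[0][i] > scores[0][best]) {
                best = i;
            }
        }
        return labels.get(best);
    }
}
```

The point is that the output is just a label with a score attached; turning "tabby cat" into "obstacle ahead that may move" is an entirely different problem.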
The skill of picking out relevant physical structure from a camera image is useful for robots, but not only for robots. Both Google and Apple are building augmented reality (AR) features into phones and tablets. Underlying those features is some ability to determine structure from images, in order to overlay an AR object on the real world. Maybe that capability can be used for a robot? Time for some research.
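To get a sense of what "determine structure from images" looks like in ARCore's terms, here is a minimal sketch, not from this post and with a made-up StructureProbe class, of the data an Android app can pull out of a com.google.ar.core.Session each frame: the tracked camera pose, detected planes, and a sparse point cloud.

```java
import com.google.ar.core.Frame;
import com.google.ar.core.Plane;
import com.google.ar.core.PointCloud;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.CameraNotAvailableException;

public class StructureProbe {
    // Called once per camera frame (e.g. from a render callback) with a
    // Session that has already been configured and resumed.
    public void onFrame(Session session) throws CameraNotAvailableException {
        Frame frame = session.update();

        // Where ARCore believes the camera (a robot's "head") currently is.
        Pose cameraPose = frame.getCamera().getPose();
        System.out.printf("Camera at (%.2f, %.2f, %.2f)%n",
                cameraPose.tx(), cameraPose.ty(), cameraPose.tz());

        // Flat surfaces ARCore has detected: floors, tabletops, walls.
        for (Plane plane : session.getAllTrackables(Plane.class)) {
            if (plane.getTrackingState() != TrackingState.TRACKING) continue;
            Pose center = plane.getCenterPose();
            // A horizontal plane below the camera is a candidate drivable floor.
            System.out.printf("Plane at (%.2f, %.2f, %.2f), about %.2f m x %.2f m%n",
                    center.tx(), center.ty(), center.tz(),
                    plane.getExtentX(), plane.getExtentZ());
        }

        // Sparse 3D feature points: the closest thing here to a depth sensor.
        try (PointCloud cloud = frame.acquirePointCloud()) {
            // The buffer holds (x, y, z, confidence) for each point.
            int numPoints = cloud.getPoints().remaining() / 4;
            System.out.println("Tracked feature points: " + numPoints);
        }
    }
}
```

Whether that kind of data is accessible, accurate, and fast enough to feed a robot's navigation code instead of an AR overlay is exactly the question to research.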