I want to implement picking an object in OSG and, at the same time, get the picked object's position. According to the book, this can be done with a pick function; the code is as follows:
```cpp
class CPickHandler : public osgGA::GUIEventHandler
{
public:
    CPickHandler(osgViewer::Viewer* viewer) : mViewer(viewer) {}

    virtual bool handle(const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter& aa)
    {
        switch (ea.getEventType())
        {
        case (osgGA::GUIEventAdapter::PUSH):
            if (ea.getButton() == osgGA::GUIEventAdapter::LEFT_MOUSE_BUTTON)
            {
                Pick(ea.getX(), ea.getY());
            }
            return true;
        default:
            break;
        }
        return false;
    }

protected:
    void Pick(float x, float y)
    {
        osgUtil::LineSegmentIntersector::Intersections intersections;
        if (mViewer->computeIntersections(x, y, intersections))
        {
            for (osgUtil::LineSegmentIntersector::Intersections::iterator hitr = intersections.begin();
                 hitr != intersections.end(); ++hitr)
            {
                if (!hitr->nodePath.empty() && !(hitr->nodePath.back()->getName().empty()))
                {
                    const osg::NodePath& np = hitr->nodePath;
                    // Walk the node path from the leaf back to the root,
                    // looking for an osgFX::Scribe ancestor to hide.
                    for (int i = np.size() - 1; i >= 0; --i)
                    {
                        osgFX::Scribe* sc = dynamic_cast<osgFX::Scribe*>(np[i]);
                        if (sc != NULL)
                        {
                            if (sc->getNodeMask() != 0) sc->setNodeMask(0);
                        }
                    }
                }
            }
        }
    }

    osgViewer::Viewer* mViewer;
};
```

Most readers will be familiar with this code. Its key call is computeIntersections, which determines whether the clicked position intersects any object in the scene, i.e. whether an object was actually hit. My project runs OSG on Android, and (perhaps for that reason) computeIntersections always returned false. Here is the function as found in the source:

```cpp
bool View::computeIntersections(float x, float y,
                                osgUtil::LineSegmentIntersector::Intersections& intersections,
                                osg::Node::NodeMask traversalMask)
{
    if (!_camera.valid()) return false;

    float local_x, local_y = 0.0;
    const osg::Camera* camera = getCameraContainingPosition(x, y, local_x, local_y);
    if (!camera) camera = _camera.get();

    osgUtil::LineSegmentIntersector::CoordinateFrame cf =
        camera->getViewport() ? osgUtil::Intersector::WINDOW : osgUtil::Intersector::PROJECTION;
    osgUtil::LineSegmentIntersector* picker =
        new osgUtil::LineSegmentIntersector(cf, local_x, local_y);

    osgUtil::IntersectionVisitor iv(picker);
    iv.setTraversalMask(traversalMask);
    const_cast<osg::Camera*>(camera)->accept(iv);

    if (picker->containsIntersections())
    {
        intersections = picker->getIntersections();
        return true;
    }
    else
    {
        intersections.clear();
        return false;
    }
}
```
Debugging showed that the problem was in getCameraContainingPosition. As the OSG expert array explains, this function checks whether the given (x, y) coordinate falls inside one of the viewer's cameras, and if so returns a pointer to that camera along with the local coordinates (local_x, local_y) of (x, y) within it. Let's look at the function itself. I recently switched to a new computer and could not find the source tree I was using at the time, so the copy below is one I grabbed off the web:
```cpp
const osg::Camera* View::getCameraContainingPosition(float x, float y,
                                                     float& local_x, float& local_y) const
{
    const osgGA::GUIEventAdapter* eventState = getEventQueue()->getCurrentEventState();
    bool view_invert_y = eventState->getMouseYOrientation() ==
                         osgGA::GUIEventAdapter::Y_INCREASING_DOWNWARDS;

    osg::notify(osg::INFO) << "View::getCameraContainingPosition(" << x << "," << y
                           << ",..,..) view_invert_y=" << view_invert_y << std::endl;

    double epsilon = 0.5;

    // Case 1: the position is seen by the master camera.
    if (_camera->getGraphicsContext() && _camera->getViewport())
    {
        const osg::Viewport* viewport = _camera->getViewport();

        double new_x = static_cast<double>(_camera->getGraphicsContext()->getTraits()->width) *
                       (x - eventState->getXmin()) / (eventState->getXmax() - eventState->getXmin());
        double new_y = view_invert_y ?
                       static_cast<double>(_camera->getGraphicsContext()->getTraits()->height) *
                       (1.0 - (y - eventState->getYmin()) / (eventState->getYmax() - eventState->getYmin())) :
                       static_cast<double>(_camera->getGraphicsContext()->getTraits()->height) *
                       (y - eventState->getYmin()) / (eventState->getYmax() - eventState->getYmin());
        // The incoming x, y are in the event adapter's input range ((-1,1) in my case).
        // The lines above remap them to window coordinates: first from (-1,1) to (0,1),
        // then multiplied by the window width/height, e.g. up to 1920 and 1080.

        // Is the clicked point inside the viewport?
        if (viewport &&
            new_x >= (viewport->x() - epsilon) && new_y >= (viewport->y() - epsilon) &&
            new_x < (viewport->x() + viewport->width() - 1.0 + epsilon) &&
            new_y <= (viewport->y() + viewport->height() - 1.0 + epsilon))
        {
            local_x = new_x;   // the output is window coordinates
            local_y = new_y;
            return _camera.get();
        }
    }

    osg::Matrix masterCameraVPW = getCamera()->getViewMatrix() *
                                  getCamera()->getProjectionMatrix();

    // Remap x, y from the input range to (-1,1) for the operations that follow.
    x = (x - eventState->getXmin()) * 2.0 / (eventState->getXmax() - eventState->getXmin()) - 1.0;
    y = (y - eventState->getYmin()) * 2.0 / (eventState->getYmax() - eventState->getYmin()) - 1.0;
    if (view_invert_y) y = -y;   // flip y if required

    // Case 2: the position is seen by one of the slave cameras; handle it similarly.
    for (unsigned i = 0; i < getNumSlaves(); ++i)
    {
        const Slave& slave = getSlave(i);
        if (slave._camera.valid())
        {
            const osg::Camera* camera = slave._camera.get();
            const osg::Viewport* viewport = camera ? camera->getViewport() : 0;

            osg::Matrix localCameraVPW = camera->getViewMatrix() * camera->getProjectionMatrix();
            if (viewport) localCameraVPW *= viewport->computeWindowMatrix();

            osg::Matrix matrix(osg::Matrix::inverse(masterCameraVPW) * localCameraVPW);
            // For a slave camera, first undo the master camera's transform to get back
            // to world coordinates, then apply the slave camera's view/projection/window
            // matrices to land in the slave camera's coordinates.
            osg::Vec3d new_coord = osg::Vec3d(x, y, 0.0) * matrix;

            if (viewport &&
                new_coord.x() >= (viewport->x() - epsilon) &&
                new_coord.y() >= (viewport->y() - epsilon) &&
                new_coord.x() < (viewport->x() + viewport->width() - 1.0 + epsilon) &&
                new_coord.y() <= (viewport->y() + viewport->height() - 1.0 + epsilon))
            {
                local_x = new_coord.x();   // again, window coordinates
                local_y = new_coord.y();
                return camera;
            }
        }
    }

    local_x = x;
    local_y = y;
    return 0;
}
```

Seeing this code, I was crushed! Why? Because this copy is not the same as the one I was actually running. The version I had would, at the start of the function, fetch the graphics window:

```cpp
const osgViewer::GraphicsWindow* gw =
    dynamic_cast<const osgViewer::GraphicsWindow*>(eventState->getGraphicsContext());
```
and then check whether gw was obtained successfully; if not, the function bails out with an error. That check was exactly why my picks kept failing.
Looking carefully at what getCameraContainingPosition actually does, it is quite simple: mostly coordinate conversion. Because the scene may contain more than one camera, the function has to decide whether the position belongs to the master camera or to a slave camera. Its input (x, y) is expected in the event adapter's normalized range ((-1,1) here), and its output is window coordinates. Once you understand that, you can obtain window coordinates directly from the system and skip getCameraContainingPosition entirely.
In computeIntersections, after getCameraContainingPosition returns, the coordinate frame cf is chosen. cf selects how the input coordinates are interpreted: it can be WINDOW or PROJECTION, where the former means window (pixel) coordinates and the latter means projection-space (clip) coordinates in the range (-1,1).
So in the end my implementation looks like this:
```cpp
// Intersection set for the pick test.
osgUtil::LineSegmentIntersector::Intersections intersections;

bool view_invert_y = ea.getMouseYOrientation() ==
                     osgGA::GUIEventAdapter::Y_INCREASING_DOWNWARDS;

// WINDOW means the input is window coordinates, i.e. ea.getX(), ea.getY()
// are screen pixels (1920 x 852 on my device).
osg::ref_ptr<osgUtil::LineSegmentIntersector> picker =
    new osgUtil::LineSegmentIntersector(osgUtil::Intersector::WINDOW, ea.getX(), ea.getY());

osgUtil::IntersectionVisitor iv(picker.get());
iv.setTraversalMask(0xffffffff);
camera->accept(iv);

if (picker->containsIntersections())
{
    // intersections now holds every node the pick ray passed through.
    intersections = picker->getIntersections();
}
else
{
    intersections.clear();
}

if (intersections.size() != 0)   // we hit something
{
    // We only want the object closest to the viewer, and the hits are
    // sorted by distance along the ray, so begin() is all we need.
    osgUtil::LineSegmentIntersector::Intersections::iterator hitr = intersections.begin();

    // Centre of the picked drawable's bounding volume.
    eye = (*hitr).drawable->getBound().center();
}
```

The resulting eye is the centre of the picked object.