Exporting 3D points visibility information #53

Open
emphos1 opened this issue Mar 28, 2020 · 3 comments

Comments

@emphos1

emphos1 commented Mar 28, 2020

Hello and thanks for publishing such an amazing code.

I would like to use the 3D points generated by LDSO in a mesh reconstruction research. For that purpose, I would like to know all 3D points which are visible from a specific KF. I know LDSO is a direct method, so is it possible for LDSO to export that visibility information? If yes, where, in the code, can I look for such information?

Thanks @NikolausDemmel

@rui2016
Collaborator

rui2016 commented Mar 28, 2020

Hi, that should be possible and not too hard to do. You can write a function to save out all the points hosted in each keyframe right before points and frames are marginalized, e.g. in FullSystem::makeKeyFrame. One hint that might be useful: when going through all the points hosted in a keyframe, it's better to only write out the points that are marginalized, i.e. frame->pointHessiansMarginalized.
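A minimal sketch of what such an export hook could look like. The stand-in structs below only mirror a small subset of LDSO's FrameHessian/PointHessian, and the helper name and output format are made up for illustration:

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Stand-in types mirroring a small subset of LDSO's FrameHessian /
// PointHessian; the real classes live in the LDSO code base.
struct PointHessian {
    float u = 0, v = 0;  // pixel coordinates in the host frame
    float idepth = 0;    // inverse depth in the host frame
};

struct FrameHessian {
    int32_t frameID = -1;
    std::vector<PointHessian*> pointHessiansMarginalized;
};

// Hypothetical helper: serialize all marginalized points hosted in
// `frame` as "frameID u v idepth" lines. In LDSO this could be called
// from FullSystem::makeKeyFrame, right before marginalization.
std::string dumpMarginalizedPoints(const FrameHessian* frame) {
    std::ostringstream out;
    for (const PointHessian* ph : frame->pointHessiansMarginalized) {
        out << frame->frameID << " " << ph->u << " " << ph->v << " "
            << ph->idepth << "\n";
    }
    return out.str();
}
```

In practice you would stream this to a file instead of a string, and you could additionally tag each line with the point's target frames if you need visibility information.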

@emphos1
Author

emphos1 commented Mar 28, 2020

Thank you @rui2016 for the prompt reply.

Let's assume we are running LDSO and have 5 KFs and 100 3D points. Then LDSO decides to add a new KF; this KF appends 10 points to the point cloud (as its contribution), so in total we have 6 KFs and 110 points. The new KF added 10 points and can also see 15 of the previous 100 points. Does the method you mentioned export all 10+15 points, or only the new 10 points? I would like to get all 10+15 points.

I'm asking because I have seen this text in /doc/note_on_DSO.pdf:
"In the feature point method, we can record in which frames a map point is seen, and even find the image descriptors in each frame; but in DSO, we will try to project each point into all active frames (unless they are out-of-boundary or occluded), calculate its residual in each frame, and do not care about the one-one correspondence between points."

@NikolausDemmel
Contributor

Here is some code I use to export the final result of an LDSO run to get all the keyframe and point information. It includes, for every map point, the list of observing target frames, so together with the poses and the intrinsics you can reproject them into the target frames to get all hosted and observed points for each frame.

This probably extracts more info than you need, but it might give you a hint on where to get which information.

struct PhotoBAInitializationLDSO {

    /// We export the original (distorted) and the undistorted camera parameters
    struct Intrinsics {
        std::string type;   //!< camera model type
        VecX params;        //!< camera model parameters
        int width;          //!< image width
        int height;         //!< image height
    };

    /// Relative pose constraint; Either from marginalization window or from loop closure
    struct RelativePose {
        EIGEN_MAKE_ALIGNED_OPERATOR_NEW

        //  `T_c_r` is the Sim3 transformation from reference frame to
        //  current frame (for loop-closure edges) or from other frame
        //  to current frame (for non-loop-closure edges)
        Sim3 T_c_r;                      //!< T_current_reference

        //  `info` is identity for non-loop-closure edges; for loop-closure edges it is the information matrix
        //  corresponding to the uncertainty of T_c_r as estimated by loop-closure detection.
        //  All perturbations in this project are left perturbations (e.g. here for T_c_r).
        Mat77 info = Mat77::Identity();  //!< information matrix, inverse of covariance, default is identity
        bool is_loop = false;
    };

    /// 2d point hosted in a frame
    struct Point {
        enum Status {
            Immature = 0,     //!< not yet activated
            ImmatureOutlier,  //!< diverged during depth initialization
            Active,           //!< active or marginalized
            ActiveOutlier     //!< previously active, now outlier or oob
        };

        Status status = Immature;
        bool is_corner = false;            //!< is corner feature with descriptor or not
        Vec2f pos_undist = Vec2f::Zero();  //!< (fixed) pixel coordinates of point (undistorted coordinates)

        // the following is only set for corner feature points (is_corner == true)
        int32_t feature_index = -1;        //!< index into list of descriptors / angles

        // the following is only set for active (non-outlier) points
        Vec2f pos_dist = Vec2f::Zero();    //!< (fixed) pixel coordinates of point (distorted / original coordinates)
        double idist = -1;                 //!< inverse distance (original scale)
        double idist_opti = -1;            //!< inverse distance (optimized scale)
        double idepth = -1;                //!< inverse depth (original scale)
        double idepth_opti = -1;           //!< inverse depth (optimized scale)
        std::vector<int32_t> target_frames;    //!< frame ids of target (observing) frames

        EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    };

    /// camera frame with list of points
    struct Frame {
        int32_t id = -1;         //!< frame id
        int32_t kfid = -1;       //!< keyframe id (or -1 for non-keyframes)
        double timestamp = 0;    //!< frame timestamp
        Vec2 ab = Vec2::Zero();  //!< affine brightness transfer
        SE3 pose_T_c_w;          //!< odometry camera pose
        SE3 pose_opti_T_c_w;     //!< optimized camera pose (incl. loop closure)
        double scale_drift_w_c = 1;   //!< estimated scale (from PGO) compared to newest keyframe

        //! kfid and each relative pose from current frame to other frames in current window
        std::map<
            int32_t, RelativePose, std::less<int32_t>,
            Eigen::aligned_allocator<std::pair<const int32_t, RelativePose>>>
            relative_poses;

        //! kfids in active window when this frame was added
        //! this is used for loop closure constraint computation (points from local map are projected into the new KF)
        std::vector<int32_t> window_frames;

        //! All points from other kfs in the active window (when kf was added) projected into current window.
        //! Does not include our own hosted points.
        //! Used for loop closure correction.
        //! Format is (u, v, idepth)
        VecVec3f center_pts;

        // corner feature information
        std::vector<std::bitset<256>> descriptors;   //!< list of feature descriptors
        std::vector<float> angles;                   //!< list of feature angles
        std::vector<int32_t> feature_point_indices;  //!< list of indices into list of all points for feature points

        std::vector<Point, Eigen::aligned_allocator<Point>> points;  //!< points hosted in this frame

        EIGEN_MAKE_ALIGNED_OPERATOR_NEW
    };

    struct AllIntrinsics {
        Intrinsics original;    //!< original intrinsics for distorted images
        Intrinsics undistored;  //!< intrinsics for undistorted images
        Intrinsics optimized;   //!< optimized intrinsics for undistorted images
    };

    AllIntrinsics intrinsics;

    std::vector<Frame, Eigen::aligned_allocator<Frame>> cameras;

    void clear();

    void create_from_map(Map& map, const Undistort& undist, const Camera& Hcalib);

    bool save(const std::string& filename) const;
};

void PhotoBAInitializationLDSO::create_from_map(Map &map, const Undistort& undist, const Camera& Hcalib)
{
    clear();

    // intrinsics
    intrinsics.original.type = undist.getCameraModelType();
    intrinsics.original.params = undist.getOriginalParameter();
    intrinsics.original.width = undist.getOriginalSize()[0];
    intrinsics.original.height = undist.getOriginalSize()[1];

    intrinsics.undistored.type = "Pinhole";
    intrinsics.undistored.params.resize(4);
    intrinsics.undistored.params << Hcalib.fx, Hcalib.fy, Hcalib.cx, Hcalib.cy;
    intrinsics.undistored.width = wG[0];
    intrinsics.undistored.height = hG[0];

    intrinsics.optimized.type = "Pinhole";
    intrinsics.optimized.params.resize(4);
    intrinsics.optimized.params << Hcalib.mpCH->fxl(), Hcalib.mpCH->fyl(), Hcalib.mpCH->cxl(), Hcalib.mpCH->cyl();
    intrinsics.optimized.width = wG[0];
    intrinsics.optimized.height = hG[0];

    // mapping kfids to ids
    std::map<int32_t, int32_t> id_from_kfid;
    for (const auto& map_frame : map.GetAllKFs()) {
        id_from_kfid[map_frame->kfId] = map_frame->id;
    }

    // create all frames
    for (const auto& map_frame : map.GetAllKFs()) {
        Frame frame;

        frame.id = map_frame->id;
        frame.kfid = map_frame->kfId;
        frame.timestamp = map_frame->timeStamp;
        frame.ab = map_frame->aff_g2l.vec();

        frame.pose_T_c_w = map_frame->getPose();
        // drop scale from sim3 pose:
        const Sophus::Sim3d pose_opti_sim3_T_c_w = map_frame->getPoseOpti();
        const double scale_w_c = 1.0 / pose_opti_sim3_T_c_w.scale();
        frame.pose_opti_T_c_w = {pose_opti_sim3_T_c_w.quaternion().normalized(), scale_w_c * pose_opti_sim3_T_c_w.translation()};
        frame.scale_drift_w_c = scale_w_c;

        for(auto &kf : map_frame->poseRel)
        {
            RelativePose relative_pose;
            relative_pose.T_c_r = kf.second.Tcr;
            relative_pose.info = kf.second.info;
            relative_pose.is_loop = kf.second.isLoop;
            frame.relative_poses.emplace(kf.first->kfId, std::move(relative_pose));
        }

        for (auto i : map_frame->window_frames) {
            frame.window_frames.push_back(i);
        }

        frame.center_pts = map_frame->centers_pts;

        // create all points for this frame
        for (size_t i = 0; i < map_frame->features.size(); i++) {
            const auto& feat = map_frame->features[i];

            Point point;

            point.is_corner = feat->isCorner;
            point.pos_undist = feat->uv;

            if (feat->isCorner) {
                point.feature_index = frame.descriptors.size();
                std::bitset<256> des;
                for (int k = 0; k < 32; k++){
                    unsigned char ch = feat->descriptor[k];
                    int n_offset =  k * 8;
                    for (int j = 0; j<8 ; j++)
                    {
                        des.set(n_offset+j, ch & (1 << j));
                    }
                }
                frame.descriptors.emplace_back(std::move(des));
                frame.angles.emplace_back(feat->angle);
                frame.feature_point_indices.emplace_back(i);
            }

            if (feat->status == ldso::Feature::FeatureStatus::IMMATURE) {
                point.status = Point::Immature;
            } else if (feat->status == ldso::Feature::FeatureStatus::OUTLIER) {
                point.status = Point::ImmatureOutlier;
            } else if (feat->point->status == ldso::Point::OUT || feat->point->status == ldso::Point::OUTLIER) {
                CHECK_EQ(feat->status, ldso::Feature::FeatureStatus::VALID);
                point.status = Point::ActiveOutlier;
            } else {
                CHECK_EQ(feat->status, ldso::Feature::FeatureStatus::VALID);
                CHECK(feat->point->status == ldso::Point::ACTIVE || feat->point->status == ldso::Point::MARGINALIZED);
                point.status = Point::Active;
                const auto* map_point = feat->point.get();

                // scale inverse depth to world

                point.idepth = feat->invD;
                point.idepth_opti = feat->invD / scale_w_c;

                // unproject point with optimized intrinsics; since we reproject it anyway, no need to scale with depth
                const Vec3 pos_cam_scaled = Vec3(Hcalib.mpCH->fxli() * feat->uv[0] + Hcalib.mpCH->cxli(),
                                                 Hcalib.mpCH->fyli() * feat->uv[1] + Hcalib.mpCH->cyli(),
                                                 1);

                // inverse distance (unlike inverse depth used by DSO)
                point.idist = point.idepth / pos_cam_scaled.norm();
                point.idist_opti = point.idepth_opti / pos_cam_scaled.norm();

                // We don't use feat->uv, since we want the point coordinates in the original distorted image
                // so we first project the 3d point using the initial calibration parameter undist.getK(), since
                // distortCoordinates uses the same to unproject the point, and then reprojects using the original
                // calibration and distortion parameters.
                Vec3 proj = undist.getK() * pos_cam_scaled;
                Vec2f uv_dist = (proj / proj.z()).topRows(2).cast<float>();
                undist.distortCoordinates(&uv_dist.x(), &uv_dist.y(), &point.pos_dist.x(), &point.pos_dist.y(), 1);

                // active residuals
                if (map_point->mpPH) {
                    for (const auto& fph : map_point->mpPH->residuals) {
                        CHECK_EQ(fph->host.lock()->frameID, frame.kfid);         // check host frame id matches
                        CHECK_LE(fph->target.lock()->frameID, map.NumFrames());  // check target frame is valid kfid
                        point.target_frames.push_back(id_from_kfid.at(fph->target.lock()->frameID));
                    }
                }

                // dropped residuals
                for (int target_id : map_point->droppedTargetFrameIds) {
                    point.target_frames.push_back(id_from_kfid.at(target_id));
                }

            }

            // append point to frame
            frame.points.emplace_back(std::move(point));
        }

        // append frame to list of frames
        cameras.emplace_back(std::move(frame));
    }

}

Inside FullSystem, it's created like:

PhotoBAInitializationLDSO init;
init.create_from_map(*globalMap, undist, *Hcalib);

I do it after everything is complete. Not sure what you might have to watch out for if you do it while the odometry or PGO is still running.
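To answer the earlier 10+15 question from this export, you can invert the per-point `target_frames` lists: a frame sees its own hosted points plus every point that lists it as a target. A sketch with plain std containers (the `ExportedPoint`/`PointRef` layout here is a hypothetical flattened view of the structs above):

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical flattened view of the export: one entry per point,
// identified by (host frame id, point index within that frame).
using PointRef = std::pair<int32_t, int32_t>;

struct ExportedPoint {
    int32_t host_frame;                  // id of the hosting keyframe
    int32_t index;                       // index within the host frame's point list
    std::vector<int32_t> target_frames;  // ids of frames observing this point
};

// Build, for every frame id, the list of points visible in it:
// its own hosted points plus all points observing it as a target.
std::map<int32_t, std::vector<PointRef>>
visiblePointsPerFrame(const std::vector<ExportedPoint>& points) {
    std::map<int32_t, std::vector<PointRef>> visible;
    for (const auto& p : points) {
        visible[p.host_frame].push_back({p.host_frame, p.index});  // hosted
        for (int32_t target : p.target_frames)
            visible[target].push_back({p.host_frame, p.index});    // observed
    }
    return visible;
}
```

In the example scenario above, the new KF's entry would then contain its 10 hosted points plus the 15 older points that list it as a target.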
