• Finally, Spark's groupByKey() function is run to produce the blocks. Each block contains its block id and the list of cameras assigned to it.
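The grouping step above can be sketched in plain Python; the helper below emulates the semantics of Spark's groupByKey() (the `group_by_key` name and the camera identifiers are illustrative, not from our implementation):

```python
from collections import defaultdict

def group_by_key(pairs):
    """Plain-Python emulation of Spark's groupByKey():
    (key, value) pairs -> {key: [values]}."""
    blocks = defaultdict(list)
    for block_id, camera in pairs:
        blocks[block_id].append(camera)
    return dict(blocks)

# Hypothetical (block_id, camera) pairs from the partitioning stage.
keyed = [(0, "cam_0"), (0, "cam_1"), (1, "cam_2"), (1, "cam_3")]
blocks = group_by_key(keyed)
# blocks[0] -> ["cam_0", "cam_1"]

# The equivalent PySpark call on an RDD of the same pairs would be:
#   blocks_rdd = keyed_rdd.groupByKey().mapValues(list)
```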
3.6 Point cloud generation
In this stage the main goal is to produce a sparse
point cloud from each block using SfM techniques.
While there are numerous approaches to SfM, a
detailed discussion of their specifics is beyond the
scope of this paper. We therefore briefly outline the
SfM approach used in our pipeline, although many
other approaches can be taken.
Once the block partitioning is completed, each
block will have a list of cameras and their corre-
sponding camera data. This camera data can be
used in a conventional SfM pipeline to produce a
sparse point cloud. In our approach, two cameras
are taken as seed cameras and an initial pose is
estimated. To produce this initial pose estimation,
the detected features are matched between the two
corresponding cameras, and the essential matrix of
the initial pose is computed. The essential matrix
is then decomposed into four separate solutions,
which are disambiguated to obtain one solution.
This solution is used to initially triangulate a sparse
point cloud. Subsequently, the other cameras are
registered using Perspective-n-Point (PnP) based
camera registration. This process runs in parallel for
each block, yielding one point cloud per block. These
point clouds are the output of this stage.
3.7 Collection and Merging
Once the per-block point clouds have been generated,
the sparse point clouds produced by the blocks can
be merged using several methods. One option is a
least-squares solution, in which the squared distance
between the overlapping 3D points of two clouds is
minimized by fitting the reference point cloud to the
target point cloud. Another approach, proposed in
[5], transforms a reference block to match the target
block using the 3D space similarity transformation
model; the transformation parameters are calculated
from the points shared by the two sub-blocks.
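The least-squares fit over overlapping 3D points has a standard closed-form solution, the SVD-based Umeyama similarity estimate; the sketch below uses that method (the function name and test data are ours, not from [5]):

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form least-squares similarity (Umeyama): finds scale s,
    rotation R, translation t with dst ~= s * R @ src + t."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    # Cross-covariance between the centred overlapping points.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: keep R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = (S * np.diag(D)).sum() / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Hypothetical shared points between a reference and a target block.
rng = np.random.default_rng(1)
ref = rng.normal(size=(8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
tgt = 2.0 * ref @ R_true.T + np.array([1., 2., 3.])
s, R, t = similarity_transform(ref, tgt)   # recovers s = 2, R_true, t
```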
In our approach, the reference block's seed cameras'
positions and rotations are taken from the target
block's overlapping cameras, and the reference block
is transformed accordingly using the 3D space simi-
larity transformation model. This method requires the
overlap to exceed two cameras in order to be effec-
tive. Should the overlap be smaller, an alternative
approach utilizing the Perspective-n-Point (PnP)
technique must be used to determine the position
and rotation of the seed camera prior to the trans-
formation.
When the overlap is greater than two cameras, it can
be used to scale each subsequent point cloud to the
first point cloud's scale using an affine transforma-
tion. In detail: suppose we have the base point cloud
A in its own chunk with its cameras A_a, B_a, C_a,
D_a, and point cloud B in the neighbouring chunk
with its cameras C_b, D_b, E_b, F_b, where C_a and
D_a are the same cameras as C_b and D_b respec-
tively, i.e. the chunks' overlap cameras. We can find
the transformation needed to bring point cloud B's
points and cameras to the same scale and rotation as
A by solving for an affine transformation T such that
A_T = T * B_T, where A_T is the baseline from C_a
to D_a and B_T is the baseline from C_b to D_b.
This T is then used to transform the entire point
cloud B into the same space as A, after which B is
appended onto A by replacing C_a with C_b and D_a
with D_b along with their newly transformed points.
Note that all calculations are done in homogeneous
coordinates. Applying this process to all subsequent
point clouds yields a fully connected graph of cam-
eras and a complete point cloud.
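A minimal numeric sketch of this baseline-based transform follows, under assumed camera positions; note that aligning a single baseline fixes scale and direction but not the roll about the baseline, which a full implementation would take from the seed cameras' rotations:

```python
import numpy as np

# Hypothetical overlap-camera positions: C and D are the same physical
# cameras expressed in each cloud's own frame.
Ca, Da = np.array([0., 0., 0.]), np.array([2., 0., 0.])   # in A's frame
Cb, Db = np.array([1., 1., 0.]), np.array([1., 2., 0.])   # in B's frame

# Scale from the baseline ratio |C_a D_a| / |C_b D_b|.
s = np.linalg.norm(Da - Ca) / np.linalg.norm(Db - Cb)

# Rotation taking B's baseline direction onto A's (Rodrigues formula;
# undefined if the baselines are exactly antiparallel, and it leaves
# the roll about the baseline unconstrained).
u = (Db - Cb) / np.linalg.norm(Db - Cb)
v = (Da - Ca) / np.linalg.norm(Da - Ca)
w, c = np.cross(u, v), u @ v
W = np.array([[0., -w[2], w[1]],
              [w[2], 0., -w[0]],
              [-w[1], w[0], 0.]])
R = np.eye(3) + W + W @ W / (1.0 + c)

# 4x4 homogeneous transform T with x_A = T @ x_B, anchored at C.
T = np.eye(4)
T[:3, :3] = s * R
T[:3, 3] = Ca - s * R @ Cb

# B's cameras mapped into A's frame land on their A counterparts.
Cb_in_A = (T @ np.append(Cb, 1.0))[:3]   # lands on Ca
Db_in_A = (T @ np.append(Db, 1.0))[:3]   # lands on Da
```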
4 Experiments and Results
The experiments in this section are conducted to
provide evidence that our approach works and is a
feasible solution. However, it is important to note
that we do not compare the time efficiency of our
approach against other methods, as we were unable
to access a Spark cluster for performance testing
at the time of writing; any performance test done
on a standalone Spark instance would not be a fair
comparison and is therefore not carried out. The
experiments include 3D reconstructing point clouds
for the observatory and statue datasets using
WSEAS TRANSACTIONS on SYSTEMS and CONTROL
DOI: 10.37394/23203.2023.18.60
L. A. H. Naurunna, S. C. Premaratne,
T. N. D. S. Ginige