Media Playing down the LiDAR path



Fig 1 – Showing a sequence of LiDAR Profiles as a video clip

Video codecs harnessed to the Silverlight MediaPlayer make a useful compression technique for large image sets. The video, in essence, is a pointer into a large library of image frames, so any data collected in a framewise spatial sense can leverage the technique. One example where this is useful is in Mobile Asset Collection (MAC) UIs: MAC corridors can create large collections of image assets in short order.

Another potential for this approach is to make use of conventional LiDAR point cloud profiles. In this project I wanted to step through a route, collecting profiles for later viewing. The video approach seemed feasible after watching cached profiles spin through a view panel connected to a MouseWheel event in another project. With a little bit of effort I was able to take a set of stepwise profiles, turn them into a video, and then connect the resulting video to a Silverlight Map Control client hooked up to a Silverlight MediaPlayer. This involved three steps:

1. Create a set of frames

Here is a code snippet used to follow a simple track down Colfax Ave in Denver, west to east. I used this to repeatedly issue LidarServer WMS GetProfileData requests and save the resulting .png images to a subdirectory. The parameters were set to sweep at a 250 ft offset on either side of the track, with a 10 ft depth and a 1 ft step interval. The result, after a couple of hours, was 19,164 .png profiles at 300px × 500px.

The code basically starts down the supplied path and calculates the sweep profile endpoints at each step using the Perpendicular function. This sweep line supplies the extent parameters for a WMS GetProfileData request.

// Note: step, width, depth (sweep parameters), points (a TextBox holding the
// route endpoints), and workingDir are assumed to be class-level fields.
private void CreateImages(object sender, RoutedEventArgs e)
{
    string colorization = "Elevation";
    string filter = "All";
    string graticule = "True";
    string drapeline = "False";
    double profileWidth = 300;
    double profileHeight = 500;

    double dx = 0;
    double dy = 0;
    double len = 0;
    double t = 0;
    string profileUrl = null;
    WebClient client = new WebClient();

    step = 57.2957795 * (step * 0.3048 / 6378137.0); // convert step from ft to approximate decimal degrees
    Point startpt = new Point();
    Point endpt = new Point();
    string[] lines = points.Text.Split('\r');
    startpt.X = double.Parse(lines[0].Split(',')[0]);
    startpt.Y = double.Parse(lines[0].Split(',')[1]);

    endpt.X = double.Parse(lines[1].Split(',')[0]);
    endpt.Y = double.Parse(lines[1].Split(',')[1]);

    dx = endpt.X - startpt.X;
    dy = endpt.Y - startpt.Y;
    len = Math.Sqrt(dx * dx + dy * dy);

    Line direction = new Line();
    direction.X1 = startpt.X;
    direction.Y1 = startpt.Y;
    width *= 0.3048; // convert sweep width from ft to meters
    int cnt = 0;
    t = step / len;

    while (t <= 1)
    {
        direction.X2 = startpt.X + dx * t;
        direction.Y2 = startpt.Y + dy * t;

        Point p0 = Perpendicular(direction, width / 2);
        Point p1 = Perpendicular(direction, -width / 2);

        p0 = Mercator(p0.X, p0.Y);
        p1 = Mercator(p1.X, p1.Y);

        profileUrl = "http://www.lidarserver.com/drcog?SERVICE=WMS&VERSION=1.3&REQUEST=GetProfileView"+
                          "&FORMAT=image%2Fpng&EXCEPTIONS=INIMAGE&CRS=EPSG:3785"+
                         "&LEFT_XY=" + p0.X + "%2C" + p0.Y + "&RIGHT_XY=" + p1.X + "%2C" + p1.Y +
                         "&DEPTH=" + depth + "&SHOW_DRAPELINE=" + drapeline + "&SHOW_GRATICULE=" + graticule +
                         "&COLORIZATION=" + colorization + "&FILTER=" + filter +
                         "&WIDTH=" + profileWidth + "&HEIGHT=" + profileHeight;

        byte[] bytes = client.DownloadData(new Uri(profileUrl));
        FileStream fs = File.Create(String.Format(workingDir + "img{0:00000}.png", cnt++)); // workingDir is assumed to end with a path separator
        BinaryWriter bw = new BinaryWriter(fs);
        bw.Write(bytes);
        bw.Close();
        fs.Close();

        direction.X1 = direction.X2;
        direction.Y1 = direction.Y2;
        t += step / len;
    }
}

// Returns the point at a perpendicular offset of dist meters from the start point of ctrline
private Point Perpendicular(Line ctrline, double dist)
{
    Point pt = new Point();
    Point p1 = Mercator(ctrline.X1, ctrline.Y1);
    Point p2 = Mercator(ctrline.X2, ctrline.Y2);

    double dx = p2.X - p1.X;
    double dy = p2.Y - p1.Y;
    double len = Math.Sqrt(dx * dx + dy * dy);
    double e = dist * (dx / len);
    double f = dist * (dy / len);

    pt.X = p1.X - f;
    pt.Y = p1.Y + e;
    pt = InverseMercator(pt.X, pt.Y);
    return pt;
}
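
The Mercator and InverseMercator helpers called above are not shown in the post. Here is a minimal spherical Mercator (EPSG:3785) sketch consistent with those calls; it is an assumption about the helpers, not the original implementation:

private Point Mercator(double lon, double lat)
{
    // forward spherical Mercator: decimal degrees -> meters
    double x = lon * Math.PI / 180.0 * 6378137.0;
    double y = Math.Log(Math.Tan(Math.PI / 4.0 + lat * Math.PI / 360.0)) * 6378137.0;
    return new Point(x, y);
}

private Point InverseMercator(double x, double y)
{
    // inverse spherical Mercator: meters -> decimal degrees
    double lon = x / 6378137.0 * 180.0 / Math.PI;
    double lat = (2.0 * Math.Atan(Math.Exp(y / 6378137.0)) - Math.PI / 2.0) * 180.0 / Math.PI;
    return new Point(lon, lat);
}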

2. Merge the .png frames into an AVI

Here is a helpful C# AviFile library wrapper. Even though it is a little old, the functions I wanted in this wrapper worked just fine. The following WPF project simply takes a set of png files and adds them one at a time to an avi clip. Since I chose the (Full) uncompressed option, I had to break my files into smaller sets to keep from running into the 4GB limit on my 32-bit system. In the end I had 7 avi clips to cover the 19,164 png frames.


Fig 2 – Create an AVI clip from png frames

using System.Drawing;
using System.Windows;
using AviFile;

namespace CreateAVI
{

    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
        }

        private void btnWrite_Click(object sender, RoutedEventArgs e)
        {
            int startframe = int.Parse(start.Text);
            int frameInterval = int.Parse(interval.Text); // number of frames to add to this clip
            double frameRate = double.Parse(fps.Text);
            int endframe = 0;

            string currentDirName = inputDir.Text;
            string[] files = System.IO.Directory.GetFiles(currentDirName, "*.png");
            if (files.Length > (startframe + frameInterval)) endframe = startframe + frameInterval;
            else endframe = files.Length;
            Bitmap bmp = (Bitmap)System.Drawing.Image.FromFile(files[startframe]);
            AviManager aviManager = new AviManager(currentDirName + outputFile.Text, false);
            VideoStream aviStream = aviManager.AddVideoStream(true, frameRate, bmp);

            Bitmap bitmap;
            int count = 0;
            for (int n = startframe+1; n < endframe; n++)
            {
                if (files[n].Trim().Length > 0)
                {
                    bitmap = (Bitmap)Bitmap.FromFile(files[n]);
                    aviStream.AddFrame(bitmap);
                    bitmap.Dispose();
                    count++;
                }
            }
            aviManager.Close();
        }
    }
}

Next I used Microsoft Expression Encoder 3 to encode the set of avi files into a Silverlight-optimized VC-1 Broadband variable bitrate .wmv output, which expects a broadband connection for an average 1632 Kbps download. The whole path sweep takes about 12.5 minutes to view and 53.5MB of disk space. I used a 25fps frame rate when building the avi files; since the sweep step is 1 ft, this works out to about 17mph down my route (25 frames/s × 1 ft/frame = 25 ft/s ≈ 17mph).

3. Add the wmv to a MediaPlayer and connect it to the Silverlight Map Control

MediaPlayer

I used an approach similar to the one described in “Azure Video and the Silverlight Path” for connecting a route path to a video. Expression Encoder 3 comes with a set of Silverlight MediaPlayer templates. I used the simple “SL3Standard” template in this case, but you can get fancier if you want.

Looking in the Expression templates subdirectory “C:\Program Files\Microsoft Expression\Encoder 3\Templates\en”, select the ExpressionMediaPlayer.MediaPlayer template you would like to use. All of the templates start from a generic.xaml MediaPlayer template. Add .\Source\MediaPlayer\Themes\generic.xaml to your project and look through the xaml for <Style TargetType="local:MediaPlayer">. Once a key name is added, this plain generic style can be referenced by a MediaPlayer in MainPage.xaml:

<Style x:Key="MediaPlayerStyle" TargetType="local:MediaPlayer">

<ExpressionMediaPlayer:MediaPlayer
   x:Name="VideoFile"
   Width="432" Height="720"
   Style="{StaticResource MediaPlayerStyle}"
/>

It is a bit more involved to add one of the more fancy templates. It requires creating another ResourceDictionary xaml file, adding the styling from the template Page.xaml and then adding both the generic and the new template as merged dictionaries:

<ResourceDictionary.MergedDictionaries>
  <ResourceDictionary Source="generic.xaml"/>
  <ResourceDictionary Source="BlackGlass.xaml"/>
</ResourceDictionary.MergedDictionaries>

Removing unnecessary controls, like the volume control, mute button, and miscellaneous other controls, involves finding each control in the ResourceDictionary xaml and changing its Visibility to Collapsed.

Loading Route to Map

The list of node points for each GetProfileData frame was captured in a text file in the first step. This file is added as an embedded resource that can be loaded at initialization. Since there are 19,164 nodes, the MapPolyline is thinned to every 25th node, resulting in a more manageable 766-node MapPolyline. The full node list is still kept in a routeLocations collection; having it available helps to sync with the video timer. The video is encoded at 25fps, so any video time can be related to a node index.

private List<Location> routeLocations = new List<Location>();

private void LoadRoute()
{
    MapPolyline rte = new MapPolyline();
    rte.Name = "route";
    rte.Stroke = new SolidColorBrush(Colors.Blue);
    rte.StrokeThickness = 10;
    rte.Opacity = 0.5;
    rte.Locations = new LocationCollection();

    Stream strm = Assembly.GetExecutingAssembly().GetManifestResourceStream("OnTerra_MACCorridor.corridor.direction.txt");

    string line;
    int cnt = 0;
    using (StreamReader reader = new StreamReader(strm))
    {
        while ((line = reader.ReadLine()) != null)
        {
            string[] values = line.Split(',');
            Location loc = new Location(double.Parse(values[0]), double.Parse(values[1]));
            routeLocations.Add(loc);
            if ((cnt++) % 25 == 0) rte.Locations.Add(loc); // keep every 25th node (one second of video at 25fps)
        }
    }
    profileLayer.Children.Add(rte);
}

A sweep MapPolyline is also added to the map, with an event handler for MouseLeftButtonDown. The corresponding MouseMove and MouseLeftButtonUp events are added to the Map Control, which sets up a user drag capability. Every MouseMove event calls a FindNearestPoint(LL, routeLocations) function, which returns a Location and updates the current routeIndex. This way the user's sweep drag movements are locked to the route, and the index is available to get the node point at the closest frame. The routeIndex is used to update the sweep profile end points to their new locations. A sketch of such a nearest-point helper follows.
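
FindNearestPoint is not listed in the post; here is a minimal linear-scan sketch, assuming the Map Control's Location type and a class-level routeIndex field:

private Location FindNearestPoint(Location ll, List<Location> nodes)
{
    double best = double.MaxValue;
    Location nearest = nodes[0];
    for (int i = 0; i < nodes.Count; i++)
    {
        double dx = nodes[i].Longitude - ll.Longitude;
        double dy = nodes[i].Latitude - ll.Latitude;
        double d2 = dx * dx + dy * dy; // squared distance is enough for comparison
        if (d2 < best)
        {
            best = d2;
            nearest = nodes[i];
            routeIndex = i; // assumed class-level field: index of the closest frame
        }
    }
    return nearest;
}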

Synchronizing MediaPlayer and Sweep location

From the video perspective, a DispatcherTimer polls the MediaPlayer time position every 500ms. The time position, returned in seconds, is multiplied by the 25fps frame rate to give the routeLocations node index, which is used to update the sweep MapPolyline locations. A sketch of this polling loop is shown below.
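
Here is a minimal sketch of that polling loop, assuming VideoFile.Position returns seconds (as the assignment in the next section implies) and a hypothetical UpdateSweep helper that recomputes the sweep MapPolyline end points:

DispatcherTimer timer = new DispatcherTimer();
timer.Interval = TimeSpan.FromMilliseconds(500);
timer.Tick += (s, e) =>
{
    // position (seconds) * 25fps gives the frame/node index
    int idx = (int)(VideoFile.Position * frameRate);
    if (idx >= 0 && idx < routeLocations.Count)
        UpdateSweep(routeLocations[idx]); // hypothetical helper: moves the sweep line to this node
};
timer.Start();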

In reverse, the user drags and drops the sweep line at some point along the route. MouseMove keeps the current routeIndex updated, so the mouse-up event can move the sweep to its new location on the route. The same MouseLeftButtonUp event handler also updates the video position by dividing the routeIndex by the frame rate:

VideoFile.Position = routeIndex/frameRate;
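
A sketch of that handler, under the same assumptions (routeIndex kept current by MouseMove, UpdateSweep as above):

private void Map_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
    isDragging = false;                          // assumed flag set in the MouseLeftButtonDown handler
    UpdateSweep(routeLocations[routeIndex]);     // snap the sweep to the nearest route node
    VideoFile.Position = routeIndex / frameRate; // jump the video to the matching frame time
}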

Summary

Since Silverlight handles media as well as maps, it is possible to take advantage of video codecs as a sort of compression technique. In this example, the large set of frame images collected from a LiDAR point cloud is readily available in a web mapping interface. Connecting the video timer with a node collection makes it relatively easy to keep the map position and the video synchronized. The video is a pointer into the large library of LiDAR profile image frames.

From a mapping perspective, this can be thought of as a raster organizational pattern, similar in a sense to tile pyramids. In this case, however, a single time axis is the pointer into a large image set, while with tile pyramids three axis pointers, zoom, x, and y, access the image library. In either case, visually interacting with a large image library enhances the human interface. My previous, unsuccessful experiment with video pyramids attempted to combine a serial time axis with the three tile pyramid axes. I still believe this will be a reality sometime.

Of course there are other universes than our earth's surface. It seems reasonable to think that dynamic visualizer approaches could be extended to other large domains. Perhaps bioinformatics could make use of tile pyramids and video codecs to explore genomic or proteomic topologies. It would be an interesting investigation.

Bioinformatics is a whole other world. While we are playing around in “mirror land” these guys are doing the “mirror us.”



Fig 3 – MediaPlayer using BlackGlass template
