
Function to parse warpables.json into a multi-sequencer request in GCloud #10

Closed

jywarren opened this issue May 1, 2019 · 79 comments

@jywarren
Member

jywarren commented May 1, 2019

We now have a basic multi-sequencer running in GCloud, although it's struggling with some bugs and performance issues -- see #6. Once we address those, we'll need a function which can fetch a warpables.json file (or receive it as JSON) and initiate a multi-sequencer call to GCloud.

Here is an example map with multiple images:

https://mapknitter.org/maps/ceres--2/
https://mapknitter.org/maps/ceres--2/warpables.json

Here is one with just a single, very small image, for initial testing (it should run much faster):

https://mapknitter.org/maps/pvdtest/
https://mapknitter.org/maps/pvdtest/warpables.json

So, we will have JSON in this format (ref publiclab/mapknitter-exporter-sinatra#1 (comment)):

{
  "images": [
    {
      "cm_per_pixel": 4.99408,
      "nodes": [
        {"lat":"-37.7664063648","lon":"144.9828654528"},
        {"lat":"-37.7650239004","lon":"144.9831980467"},
        {"lat":"-37.7652020107","lon":"144.9844533205"},
        {"lat":"-37.7665844718","lon":"144.9841207266"}
      ],
      "src":"https://s3.amazonaws.com/grassrootsmapping/warpables/306187/DJI_1207.JPG"
    }
  ]
}

(An id is also optional on each node.)

And we'll want to generate a request in a format like this (the step sequence below isn't quite right, but it does include webgl-distort, plus overlay steps to composite multiple images):

https://us-central1-public-lab.cloudfunctions.net/is-function-edge/api/v1/process/?steps=[{"id":1,"input":"https://i.publiclab.org/i/31778.png","steps":"webgl-distort{nw:0,101|ne:1023,-51|se:1223,864|sw:100,764}"},{"id":2,"input":1,"steps":"import-image{url:https://i.publiclab.org/i/31778.png},webgl-distort{nw:0,101|ne:1023,-51|se:1223,864|sw:100,764},overlay"}]

https://us-central1-public-lab.cloudfunctions.net/is-function-edge/api/v1/process/?steps=
[
  {
    "id":1,
    "input":"https://i.publiclab.org/i/31778.png",
    "steps":"webgl-distort{nw:0,101|ne:1023,-51|se:1223,864|sw:100,764}"
  },
  {
    "id":2,
    "input":1,
    "steps":"import-image{url:https://i.publiclab.org/i/31778.png},webgl-distort{nw:0,101|ne:1023,-51|se:1223,864|sw:100,764},overlay"
  }
]
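As a rough sketch of assembling such a request from a steps array (the helper name is hypothetical; the base URL is the one quoted above):

```javascript
// Hypothetical helper: serialize a steps array into the GCloud request URL
// format shown above. encodeURIComponent keeps the JSON safe in a query string.
const BASE_URL =
  "https://us-central1-public-lab.cloudfunctions.net/is-function-edge/api/v1/process/";

function buildRequestUrl(steps) {
  return BASE_URL + "?steps=" + encodeURIComponent(JSON.stringify(steps));
}
```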
@jywarren
Member Author

jywarren commented May 1, 2019

For starters, we can simply use lat/lon values and scale them to use as pixel values. But a follow-up step will require using converters from lat/lon to x,y; there are a few available.

In any of these cases, we'd still want to find the lowest x,y values and use those as the origin, so we'd subtract those from all other coords to get the final pixel x,y coords. And we'd apply a scaling value based on cm_per_pixel.
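A minimal sketch of that origin-subtraction plus scaling; the equirectangular approximation, earth radius constant, and function name are my assumptions, not the final method:

```javascript
// Sketch: shift a lat/lon coordinate by the minimum (origin), convert the
// angular difference to meters along a great circle, then scale by
// cm_per_pixel. The equirectangular approximation here is an assumption.
const EARTH_RADIUS_M = 6371000;

function degreesToPixels(deg, minDeg, cmPerPixel) {
  const meters = EARTH_RADIUS_M * ((deg - minDeg) * Math.PI / 180);
  return (meters * 100) / cmPerPixel; // meters -> centimeters -> pixels
}
```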

I can help with this whole portion later, so we don't have to worry too much about it yet; but it does correspond to this Ruby code:

https://github.com/publiclab/mapknitter-exporter/blob/6c2fc0671944fc2c673fc7eb0b68a88e46ca3882/lib/mapknitterExporter.rb#L53-L69

@tech4GT
Member

tech4GT commented May 20, 2019

@jywarren I am not very familiar with how WebGL-based functions work; it would be great if you could guide me here, or point me to some resources on how we get from the lat/lon values in the input JSON to the inputs for the webgl-distort module, which are x,y coordinates for all four corners.
Sorry if this is basic stuff, I am just new to this 😅

@tech4GT
Member

tech4GT commented May 20, 2019

So what I understand from reading the links mentioned above is that we are trying to take the spherical lat/lon values and convert them to x/y values on a plane. One thing I am not sure of is our reference point while calculating the new positions; I mean, x,y values with respect to what point? I would really appreciate any help here @jywarren @icarito 😄

@tech4GT
Member

tech4GT commented May 20, 2019

Okay, so looking at the Ruby version here: https://github.com/publiclab/mapknitter-exporter/blob/6c2fc0671944fc2c673fc7eb0b68a88e46ca3882/lib/mapknitterExporter.rb#L53-L69
it appears to me that we are calculating the absolute minimum and maximum of the x and y values, and using combinations of these to represent the four corners with respect to which we distort. Is this correct?

@tech4GT
Member

tech4GT commented May 20, 2019

Okay, this kind of makes sense now! @jywarren I have most of this down; I just need a little help with how to use the scaling factor (cm per pixel).

@jywarren
Member Author

> conversion from the lat/lon to actual module inputs for the webgl-distort. So the understanding that I have developed is that we subtract the minimum value of the lat/lon from all 4 points to get the relative coordinates in degrees. We can then use the formula (radius of the earth)*(angle in radians) to get the actual distance. We should then divide this by the cm_per_pixel value to generate the actual coordinates. But the issue is this value is coming out far greater than the height and width of the image mentioned in the JSON.

> how we get from the lat,long in the input json to the inputs for the webgl-distort module which are x,y coordinates for all four corners

Yes, so:

  1. The images are stored with lat/lon values which "kind of" correspond to x and y, if you consider the north pole to be 0,0 😄 💈 ⛄️ It's actually OK to build the function assuming this, and not worry about converting to a planar coordinate space, in the first version.
  2. Yes, for the purposes of distortion we actually don't need to worry about a planar x,y origin; we just look through ALL points in ALL images we're dealing with, find the lowest lat and lon, and treat that (minlat, minlon) as our x,y origin.
  3. Then all points are treated as relative to that origin. As I mention in step 1, in the very first version you can simply take the difference in lat and lon and use those values; don't worry about converting to a planar coordinate scheme. They'll come out a bit flattened, because a degree of latitude and a degree of longitude aren't equal distances except at the equator, but that's OK for starters.
  4. So just use a scale factor to convert a given lat or lon value to pixel space directly.
  5. maxlon and maxlat will let you figure out your total canvas size, with maxlat - minlat being your height, and maxlon - minlon being your width.

In the final version we can circle back and convert lat and lon to planar x,y values so that the images aren't flattened. That'll require the conversion functions I mentioned in this comment.
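The steps above might be sketched like this; the function name and the flat degrees-to-pixels scale factor are assumptions for the first version:

```javascript
// Find the min/max lat/lon across all images' nodes, treat (minLon, minLat)
// as the origin (step 2), and size the canvas from the lat/lon spans (step 5),
// scaled flatly into pixel space (steps 3-4).
function boundsAndCanvas(images, scale) {
  const nodes = images.flatMap(img => img.nodes);
  const lats = nodes.map(n => parseFloat(n.lat));
  const lons = nodes.map(n => parseFloat(n.lon));
  const minLat = Math.min(...lats), maxLat = Math.max(...lats);
  const minLon = Math.min(...lons), maxLon = Math.max(...lons);
  return {
    origin: { lat: minLat, lon: minLon },
    width: Math.ceil((maxLon - minLon) * scale),  // maxLon - minLon is the width
    height: Math.ceil((maxLat - minLat) * scale), // maxLat - minLat is the height
  };
}
```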

To be honest, I kind of forget how cm_per_pixel relates directly to scale. But we can look that up; I think the value on this line:

https://github.com/publiclab/mapknitter/blob/main/lib/exporter.rb#L63

is from the mercator projection conversion function: https://www.baeldung.com/java-convert-latitude-longitude

OK, so scale is the same as the cm_per_pixel: https://github.com/publiclab/mapknitter-exporter/blob/6c2fc0671944fc2c673fc7eb0b68a88e46ca3882/lib/mapknitterExporter.rb#L348

But pixels per meter, or pxperm, is the inverse of that, times 100: https://github.com/publiclab/mapknitter-exporter/blob/6c2fc0671944fc2c673fc7eb0b68a88e46ca3882/lib/mapknitterExporter.rb#L66
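As a sanity check on that relationship (hypothetical helper; by dimensional analysis, if scale is in cm per pixel, then pixels per meter is 100 divided by it, since a meter is 100 cm):

```javascript
// Hypothetical helper: pixels per meter from a cm-per-pixel scale.
// (1 px / scale cm) * (100 cm / m) = 100 / scale px per meter.
function pxPerMeter(cmPerPixel) {
  return 100 / cmPerPixel;
}
```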

Does this help? Sorry, cartography is always a bit confusing!!!

@tech4GT
Member

tech4GT commented May 22, 2019

@jywarren Yeah, one more question though: how do I use pxperm to scale if the distance we have is not in meters but in degrees? I mean, lon/lat values, even relative to a point, are going to be in degrees, right? So do we use the formula I mentioned above, (radius of the earth)*(relative value of lat/lon)?
So I'll push in a first draft using this formula: x = width*(lon-minLon)/(maxLon-minLon)
until we can clarify how to use cm_per_pixel 😄
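That first-draft formula, as a hypothetical helper (linear interpolation of a longitude into [0, width]; the same would apply to latitude and height):

```javascript
// Map a longitude onto the canvas x axis by linear interpolation between
// the minimum and maximum longitude found across all images.
function lonToX(lon, minLon, maxLon, width) {
  return width * (lon - minLon) / (maxLon - minLon);
}
```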

@tech4GT
Member

tech4GT commented May 22, 2019

Alright! @jywarren Try this https://us-central1-public-lab.cloudfunctions.net/is-function-edge/?steps=[{%22id%22:1,%22input%22:%22https://s3.amazonaws.com/grassrootsmapping/warpables/312456/test.png%22,%22steps%22:%22canvas-resize{},webgl-distort{nw%3A57%252C0%7Cne%3A214%252C35%7Cse%3A133%252C166%7Csw%3A0%252C134}%22}]

@tech4GT
Member

tech4GT commented May 22, 2019

So, to avoid the image being cropped, we need to resize the canvas first. Now pushing the converter I used!

@tech4GT
Member

tech4GT commented May 22, 2019

Alright!
@jywarren I have deployed a function on the cloud which is a proof of concept for the conversion; it takes the warpables.json file address as a query parameter and redirects to the correct URL. You can check it out here:
https://us-central1-public-lab.cloudfunctions.net/is-convert/?url=https://mapknitter.org/maps/pvdtest/warpables.json

@tech4GT
Member

tech4GT commented May 22, 2019

🎉

@tech4GT
Member

tech4GT commented May 22, 2019

I think the next step here will be to start to bring all this together and stitch all the images together using canvas-resize and overlay! 😃

@jywarren
Member Author

Ok cool! Running this on a 3-image map with larger images, I see:

https://us-central1-public-lab.cloudfunctions.net/is-convert/?url=https://mapknitter.org/maps/ceres--2/warpables.json

[image attachment]

@jywarren
Member Author

Oh, actually it was three images, so maybe some images are getting lost as the subsequent images are pasted over them? But it definitely warped all three!

@jywarren
Member Author

For comparison, ceres--2 in MapKnitter looks like:
[screenshot]

@tech4GT
Member

tech4GT commented May 22, 2019

@jywarren Awesome! So tomorrow I'll try to manually generate a sequence to export a 3-image map, and if that goes well we can start writing a script to automate it!

@tech4GT
Member

tech4GT commented May 22, 2019

@jywarren Also, as the output of the 3-image URL you mentioned earlier: I tried it, and it gives something like this:
[screenshot: Screen Shot 2019-05-22 at 9 28 49 PM]

Actually, right now I have just aligned the images separately, side by side, rather than overlaying them onto a single canvas. Maybe you need to adjust the zoom to see all 3?

@jywarren
Member Author

jywarren commented May 22, 2019 via email

@tech4GT
Member

tech4GT commented May 23, 2019

@jywarren I am facing an issue, and I think it relates to the internal structure of webgl-distort.
The final corner points are coming out different from the inputs. Aren't the inputs the exact final corner points, laid out like
(nw,ne)
(sw,se)
?

@jywarren
Member Author

jywarren commented May 23, 2019 via email

@tech4GT
Member

tech4GT commented May 23, 2019

I'll take a look!
For the test image I used the following values:
(40,0|214,45)
(0,104|169,166)
with the image width being 214 and the height 166.
This is the result that I got:
[images: image_0, image_1]

@jywarren
Member Author

jywarren commented May 23, 2019 via email

@tech4GT
Member

tech4GT commented May 23, 2019

@jywarren Is there someone else who would know more about this? Maybe someone from the MapKnitter team?
I'll go through the MapKnitter Ruby version code in the meantime. 😄

@jywarren
Member Author

jywarren commented May 23, 2019 via email

@tech4GT
Member

tech4GT commented May 23, 2019

Actually, my question is that the final coordinates are coming out very different from what we are putting in as input, and I can't think of a reason why 😅

@tech4GT
Member

tech4GT commented May 23, 2019

Actually, it's not just the order; the corner points are entirely different from the input values.

@jywarren
Member Author

jywarren commented May 23, 2019 via email

@tech4GT
Member

tech4GT commented May 23, 2019

Oh, do you mean in the image sequencer module code?

@jywarren
Member Author

Yeah like where do you mean you're passing values in and seeing different values out? Can you help me reproduce what you're seeing? Thanks!

@tech4GT
Member

tech4GT commented May 26, 2019

Also, can you please suggest a different map with multiple images which are smaller in size? Thanks!

@jywarren
Member Author

jywarren commented May 26, 2019 via email

@tech4GT
Member

tech4GT commented May 26, 2019

Oh, great! I'll start writing up the script for compositing sequence.

@tech4GT
Member

tech4GT commented May 26, 2019

Alright! @jywarren I think I have this pretty much down. I am running into one final issue: after warping, the overlay step does not overlay correctly; some part is cropped off. This does not happen with canvas-resize, however, even though the code is essentially the same. I guess I'll have to dig a little deeper, but I think we are pretty close! 😄
Using this sequence: import-image, webgl-distort, canvas-resize, import-image, webgl-distort, overlay
[image: image_6]

@jywarren
Member Author

jywarren commented May 26, 2019 via email

@tech4GT
Member

tech4GT commented May 26, 2019

Alright! I found the error and fixed it; it was a little mistake in the overlay module (I'll push it to the main image-sequencer repo later). Now the images line up perfectly, but the extra space around the warped image is black, which covers the previous image on overlay. We may have to write a new overlay module specifically for this use case, but glad to see this work!
[image: image_6]

@tech4GT
Member

tech4GT commented May 27, 2019

Okay, so I used a little trick just to see if everything works correctly: I excluded completely black pixels from being overlaid. The result is perfect!
[image: image_6]
@jywarren 🎉
All that needs to be done now is to make the subpart-overlay module, and we are ready for deployment!
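The "skip fully black pixels" trick could look roughly like this on raw RGBA arrays (function and parameter names are assumptions, and the real module would also handle offsets):

```javascript
// Copy the base image, then write overlay pixels on top of it, except where
// the overlay pixel is pure black (treated as empty canvas around the warp).
function overlaySkipBlack(basePixels, overlayPixels) {
  const out = Uint8ClampedArray.from(basePixels);
  for (let i = 0; i < overlayPixels.length; i += 4) {
    const r = overlayPixels[i], g = overlayPixels[i + 1], b = overlayPixels[i + 2];
    if (r === 0 && g === 0 && b === 0) continue; // keep the base pixel visible
    out[i] = r; out[i + 1] = g; out[i + 2] = b; out[i + 3] = overlayPixels[i + 3];
  }
  return out;
}
```

Note the obvious limitation, which the thread acknowledges: genuinely black content in a photo would also be skipped, hence the planned subpart-overlay module.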

@tech4GT
Member

tech4GT commented May 27, 2019

Also, I am noticing that single sequences are running faster than the equivalent multi-step sequence! I think the added overhead of creating sequencer instances is actually more than the time we save!

@tech4GT
Member

tech4GT commented May 27, 2019

Alright! I have deployed a working exporter to the cloud; it gives the correct result locally, but the function runs out of memory in the cloud. Here is the final result locally (a screenshot; the whole image is too large!):
[screenshot: Screen Shot 2019-05-27 at 5 49 44 PM]
@jywarren All we need now is to deploy on a platform with more memory, and this should give us a basic exporter! 🎉
I have made a special overlay module for this (different from the one in the core sequencer), and it is consumed via npm in our app.
Some things still need work; for example, the canvas is a little small to accommodate all the images (I think this is happening because the given corners for distortion and the final result are off by 100-200 pixels).
But still, this is a start 😄

@jywarren
Member Author

jywarren commented May 27, 2019 via email

@tech4GT
Member

tech4GT commented May 27, 2019

Sure! Docker is already set up on the project, so we should be able to configure this fairly easily! I'll commit the changes I made to the API, and then we should be able to export it as a Docker container.

@tech4GT
Member

tech4GT commented May 27, 2019

@jywarren One thing I wanted to run by you: right now I have reduced the API to run a single sequence, since that was faster, but let's also keep the old code; it would help in parallelizing the process. 😄

@tech4GT
Member

tech4GT commented May 27, 2019

Alright! Updated the API and added a /api/v1/convert/?url=warpables.json route which returns the converted URL!
Now working on the canvas size adjustment.
To set this up, all we need to do is clone this repo and export the Docker image using the Docker CLI commands! :D
@jywarren @icarito

@jywarren
Member Author

> I have reduced the API to run a single sequence since that was faster, but let's also keep the old code, it would help in parallelizing the process. 😄

Yes, I think the parallel processing may be very important as file size and count scale up, no?

url= can be remote, right?

@jywarren
Member Author

jywarren commented May 27, 2019

This function (referencing issue title) is now at https://github.com/publiclab/image-sequencer-app/blob/main/src/util/converter.js

And accessed via this new route: af29486

@tech4GT
Member

tech4GT commented May 28, 2019

@jywarren Yeah, parallel processing would be very important! But I want to get the basic version running first, after which we can use the cluster API for parallelizing.
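Before wiring up the Node cluster API, the per-image sequences would need to be partitioned across workers; a hypothetical round-robin split (name and shape are assumptions):

```javascript
// Distribute sequences across nWorkers in round-robin order, so each
// cluster worker gets a roughly equal share of the work.
function chunkForWorkers(sequences, nWorkers) {
  const chunks = Array.from({ length: nWorkers }, () => []);
  sequences.forEach((seq, i) => chunks[i % nWorkers].push(seq));
  return chunks;
}
```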

@tech4GT
Member

tech4GT commented May 28, 2019

I'll push in the canvas size fix today for sure! :D

@tech4GT
Member

tech4GT commented May 28, 2019

Okay, pushed the revised converter, which adds some extra canvas space to prevent images from getting cropped. I tried it out locally with the ceres--2 map and it works fine.
We are ready to deploy a first draft!
Once this is working in the cloud, the next steps will be optimizations and adding the clustered API parts!
[screenshot: Screen Shot 2019-05-28 at 9 41 22 AM]

@tech4GT
Member

tech4GT commented May 28, 2019

Alright! @jywarren I have added preliminary parallelization in the multiSequencer file; do take a look! I am only using one core of the CPU right now, but it is already so much faster! 🎉
I'll deploy this as a separate cloud function for you to try out!

@tech4GT
Member

tech4GT commented May 28, 2019

It's still running into memory constraints in the cloud functions!
So to sum up:

  • We have a basic linear implementation and a converter for it (it is the default in the app right now, i.e. if you clone and run npm run start, the linear version will run)
  • We have a parallelized implementation which is much faster! (To run it, clone and run npm run start-multi) 😄
    cc @jywarren @icarito

@tech4GT
Member

tech4GT commented May 29, 2019

@jywarren I think there is some problem with the way I have implemented this function. I tried increasing the scaling factor in the function and it just broke! The images don't line up if I use a different height and width. Maybe you can take a look to see if anything looks wrong? 😅

@tech4GT tech4GT reopened this May 29, 2019
@tech4GT
Member

tech4GT commented May 29, 2019

Also, @jywarren, I have added the tests and the README, left comments in the code, and cleaned it up.
So this is the final piece! Once we get this to work, we are ready to deploy!

@tech4GT
Member

tech4GT commented May 29, 2019

Basically, the problem is that the images are being overlaid differently for different canvas sizes!
The formula I have used to get the distortion positions is:
(node.x - minLon)*width/(maxLon-minLon), and similarly for y.
Then the overlay position is the minX and minY over all four corners of one image.
This only works with some values of width and height! In other cases the overlay is off! Since I am not really familiar with how webgl-distort works internally, I would really appreciate any help here :D
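The overlay-position rule described above, as a hypothetical helper (place each warped image at the minimum x and y of its four distorted corners):

```javascript
// Overlay position = component-wise minimum over the four corner points.
function overlayPosition(corners) {
  return {
    x: Math.min(...corners.map(c => c.x)),
    y: Math.min(...corners.map(c => c.y)),
  };
}
```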

@jywarren
Member Author

jywarren commented May 29, 2019 via email

@tech4GT
Member

tech4GT commented May 29, 2019

Sure! Thanks a ton! 😃
