Hi, first thanks for your time.

I am trying to draw something on the NFT tracking object, and I want to draw it at exactly the same size as the target.

From the KPM result we can get the pose, so I can draw it at the right place.

But how can I figure out how large to draw it, so that it matches the size of the target on the screen?

Thanks!

## How to get the drawing size of the NFT match result

### Re: How to get the drawing size of the NFT match result

1. You need to know the actual size of the image (in millimeters).

1) If the image itself contains DPI information, just read it. I read HorizontalPixelSize and VerticalPixelSize, then compute width = widthInPixels * HorizontalPixelSize and height = heightInPixels * VerticalPixelSize. Save width and height for later use.

2) Otherwise, when generating the fset/fset3/iset files, genTexData will ask you for the image DPI; pixelSize = 25.4 / DPI, then repeat the calculation in 1).

2. Calculate the corner coordinates.

1) You already know how to draw at the right place, nice! The transformation is the camera matrix (projection matrix) multiplied by the pose matrix (modelview matrix).

2) The bottom-left corner is at (0, 0, 0), the bottom-right at (width, 0, 0), the top-right at (width, height, 0), and the top-left at (0, height, 0). Knowing these coordinates, transform them with the matrix from 1).

Examples:

To draw a dot at the center of the image, transform the vertex (width/2, height/2, 0) with the camera matrix and pose matrix, and draw at that pixel.

To draw a 2-millimeter-high cuboid covering the image, render a cube model (centered at (0, 0, 0), size 1 mm) with the transformation camera * pose * scale(width, height, 2) * translate(0.5, 0.5, 0.5).

### Re: How to get the drawing size of the NFT match result

After reading the fset/iset (ar2ReadSurfaceSet), you can read the width/height from the surfaceSet:

(code from iOS example ARMarkerNFT.m, but you can also use it as C++ code)

```c
if (surfaceSet->surface && surfaceSet->surface[0].imageSet && surfaceSet->surface[0].imageSet->scale) {
    // Assume the best scale (largest image) is the first entry in the scale array
    // (valid indices are [0, surfaceSet->surface[0].imageSet->num - 1]).
    AR2ImageT *image = surfaceSet->surface[0].imageSet->scale[0];
    marker_width  = image->xsize * 25.4f / image->dpi;
    marker_height = image->ysize * 25.4f / image->dpi;
}
```
