// We filter below by discarding all correspondences that are more than a certain distance apart.
// We could also (or in addition) discard the worst 5% of the distances or something like that.
// Filter and store the image (edge) points with their corresponding vertex id:
vector<int> vertex_indices;
vector<cv::Vec2f> image_points;
assert(occluding_vertices.size() == idx_d.size());
for (int i = 0; i < occluding_vertices.size(); ++i)
{
    auto ortho_scale = rendering_parameters.get_screen_width() / rendering_parameters.get_frustum().r; // This might be a bit of a hack - we recover the "real" scaling from the SOP estimate
    if (idx_d[i].second <= distance_threshold * ortho_scale) // I think multiplying by the scale is good here and gives us invariance w.r.t. the image resolution and face size.
    {
        auto edge_point = image_edges[idx_d[i].first];
        // Store the found 2D edge point and the associated vertex id:
        vertex_indices.push_back(occluding_vertices[i]);
        image_points.push_back(edge_point);
    }
}
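
// The comment at the top mentions discarding the worst 5% of the distances as an alternative
// (or an addition) to the fixed threshold. A minimal sketch of that variant - assuming idx_d
// holds (edge index, distance) pairs as above, reusing the same vectors, and with <vector>
// and <algorithm> available - could look like this:
std::vector<double> sorted_distances;
sorted_distances.reserve(idx_d.size());
for (const auto& match : idx_d)
    sorted_distances.push_back(match.second);
std::sort(sorted_distances.begin(), sorted_distances.end());
// Keep only correspondences whose distance is below the 95th percentile:
const double percentile_cutoff = sorted_distances[static_cast<std::size_t>(0.95 * (sorted_distances.size() - 1))];
for (int i = 0; i < occluding_vertices.size(); ++i)
{
    if (idx_d[i].second <= percentile_cutoff)
    {
        vertex_indices.push_back(occluding_vertices[i]);
        image_points.push_back(image_edges[idx_d[i].first]);
    }
}
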
return { current_meshs, rendering_params }; // I think we could also work with a Mat face_instance in this function instead of a Mesh, but it would convolute the code more (i.e. more complicated to access vertices).
// This call is expensive (12 ms), but it is mostly the inner function generic_dense.. that is expensive.
MatrixXf basis_rows_ = morphable_model.get_shape_model().get_rescaled_pca_basis_at_point(vertex_ids[k][i]); // In the paper, the orthonormal basis might be used? I'm not sure, check it. It's even a mess in the paper. PH 26.5.2014: I think the rescaled basis is fine/better.
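
// For reference (a sketch, not part of the original code): the "rescaled" PCA basis referred to
// above is the orthonormal basis with each column scaled by the square root of the corresponding
// eigenvalue (i.e. the standard deviation of that component). Assuming an Eigen orthonormal basis
// and a vector of eigenvalues, the rescaling could be done like this:
#include <Eigen/Core>
#include <cmath>

inline Eigen::MatrixXf rescale_pca_basis(const Eigen::MatrixXf& orthonormal_basis,
                                         const Eigen::VectorXf& eigenvalues)
{
    Eigen::MatrixXf rescaled_basis = orthonormal_basis;
    for (int c = 0; c < rescaled_basis.cols(); ++c)
    {
        rescaled_basis.col(c) *= std::sqrt(eigenvalues(c)); // scale each component by its standard deviation
    }
    return rescaled_basis;
}
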
/**
 * Fits the shape of a Morphable Model to given 2D landmarks (i.e. estimates the maximum likelihood solution of the shape coefficients) as proposed in [1].
 * It's a linear, closed-form solution fitting of the shape, with regularisation (prior towards the mean).
 *
 * [1] O. Aldrian & W. Smith, Inverse Rendering of Faces with a 3D Morphable Model, PAMI 2013.
 *
 * Note: Using fewer than the maximum number of coefficients to fit is not thoroughly tested yet and may contain an error.
 * Note: Returns coefficients following a standard normal distribution (i.e. all have similar magnitude). Why? Because we fit using the normalised basis?
 * Note: The standard deviations given should be a vector, i.e. different for each landmark. This is not implemented yet.
 *
 * @param[in] morphable_model The Morphable Model whose shape (coefficients) are estimated.
 * @param[in] affine_camera_matrix A 3x4 affine camera matrix from model to screen-space (should probably be of type CV_32FC1 as all our calculations are done with float).
 * @param[in] landmarks 2D landmarks from an image to fit the model to.
 * @param[in] vertex_ids The vertex ids in the model that correspond to the 2D points.
 * @param[in] base_face The base or reference face from which the fitting is started. Usually this would be the model's mean face, which is what will be used if the parameter is not explicitly specified.
 * @param[in] lambda The regularisation parameter (weight of the prior towards the mean). Gets normalised by the number of images given.
 * @param[in] num_coefficients_to_fit How many shape coefficients to fit (all others will stay 0). Should be greater than zero, or boost::none to fit all coefficients.
 * @param[in] detector_standard_deviation The standard deviation of the 2D landmarks given (e.g. of the detector used), in pixels.
 * @param[in] model_standard_deviation The standard deviation of the 3D vertex points in the 3D model, projected to 2D (so the value is in pixels).
 * @return The estimated shape-coefficients (alphas).
 */
MatrixXf basis_rows = morphable_model.get_shape_model().get_rescaled_pca_basis_at_point(vertex_id); // In the paper, the orthonormal basis might be used? I'm not sure, check it. It's even a mess in the paper. PH 26.5.2014: I think the rescaled basis is fine/better.
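
// A minimal sketch (not the library's actual implementation) of the closed-form, regularised
// linear solve described in the documentation above: with a system A * alpha = b assembled from
// the projected PCA basis rows and the landmark residuals, the coefficients with a prior towards
// the mean are obtained via Tikhonov regularisation. A, b and lambda are placeholder names for
// whatever the real function assembles; this assumes <Eigen/Dense> is available.
#include <Eigen/Dense>

inline Eigen::VectorXf solve_regularised_linear_system(const Eigen::MatrixXf& A,
                                                       const Eigen::VectorXf& b, float lambda)
{
    // Solve (A^t * A + lambda * I) * alpha = A^t * b with a robust Cholesky-type decomposition:
    const Eigen::MatrixXf AtA = A.transpose() * A;
    const Eigen::MatrixXf regularised_AtA =
        AtA + lambda * Eigen::MatrixXf::Identity(AtA.rows(), AtA.cols());
    return regularised_AtA.ldlt().solve(A.transpose() * b);
}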