How can I transform an image so that the projected image is the same as the original
1 vote
07 December 2011

Problem statement: Image A is projected through a projector, passes through a microscope, and the projected image is captured by a camera through the same microscope as image B. Because of the optical elements, B is rotated, shifted, and distorted with respect to A. I now need to transform A into A' before projection, so that B is as close to A as possible.

Initial approach: I took a chessboard picture and rotated it to various angles (36, 72, 108, ... 324 degrees) and projected it to obtain a series of images A and B. I used the OpenCV functions CalibrateCamera2, InitUndistortMap and Remap to transform B into B'. But B' is far from A and quite similar to B (in particular, the substantial rotation and shift are not corrected at all).

The code (in Python) is below. I am not sure whether I am doing something silly. Any ideas on the right approach?
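To make the goal concrete: what I am after is the 2x3 affine matrix that maps points of A onto points of B (rotation, shift, scale). Such a matrix can be estimated from matched chessboard corners by a plain least-squares fit. Below is a minimal numpy-only sketch; the helper name `fit_affine` and the synthetic points are made up for illustration and are not part of my actual pipeline:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine matrix M so that dst ~ M @ [x, y, 1]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])    # n x 3, one row per point
    # Solve A @ M.T = dst in the least-squares sense
    M_T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M_T.T                             # 2 x 3

# Synthetic check: rotate by 30 degrees and shift by (5, -2)
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([5.0, -2.0])
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
dst = src @ R.T + t
M = fit_affine(src, dst)
```

With exact correspondences the fit recovers the rotation block and the translation column exactly; with noisy detected corners it averages the error over all points.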

import pylab
import os
import cv
import cv2
import numpy

# angles - the angles at which the picture was rotated 
angles = [0, 36, 72, 108, 144, 180, 216, 252, 288, 324]
# orig_files - list of original picture files used for projection
orig_files =  ['../calibration/checkerboard/orig_%d.png' % (angle) for angle in angles]
# img_files - projected image captured by camera
img_files = ['../calibration/checkerboard/imag_%d.bmp' % (angle) for angle in angles]
# Load the images 
images = [cv.LoadImage(filename) for filename in img_files]
orig_images = [cv.LoadImage(filename) for filename in orig_files]

# Convert to grayscale
gray_images = [cv.CreateImage((src.width, src.height), cv.IPL_DEPTH_8U, 1) for src in images]
for ii in range(len(images)):
    cv.CvtColor(images[ii], gray_images[ii], cv.CV_RGB2GRAY)
gray_orig = [cv.CreateImage((src.width, src.height), cv.IPL_DEPTH_8U, 1) for src in orig_images]
for ii in range(len(orig_images)):
    cv.CvtColor(orig_images[ii], gray_orig[ii], cv.CV_RGB2GRAY)

# The number of ranks and files in the chessboard. OpenCV considers
# the height and width of the chessboard to be one less than these,
# respectively.
rank_count = 11
file_count = 10

# Try to detect the corners of the chessboard. For each image,
# FindChessboardCorners returns (found, corner_points). found is True
# even if it managed to detect only a subset of the actual corners.
img_corners = [cv.FindChessboardCorners(img, (rank_count-1, file_count-1)) for img in gray_images]
orig_corners = [cv.FindChessboardCorners(img, (rank_count-1,file_count-1)) for img in gray_orig]

# The total number of corners will be (rank_count-1)x(file_count-1),
# but if some parts of the image are too blurred/distorted,
# FindChessboardCorners detects only a subset of the corners. In that
# case, DrawChessboardCorners will raise a TypeError.
orig_corner_success = []
ii = 0
for (found, corners) in orig_corners:
    if found and (len(corners) == (rank_count - 1) * (file_count - 1)):
        orig_corner_success.append(ii)
    else:
        print orig_files[ii], ': could not find correct corners: ', len(corners)
    ii += 1
ii = 0
img_corner_success = []
for (found, corners) in img_corners:
    if found and (len(corners) == (rank_count-1) * (file_count-1)) and (ii in orig_corner_success):
        img_corner_success.append(ii)
    else:
        print img_files[ii], ': Number of corners detected is wrong:', len(corners)
    ii += 1

# Here we compile all the corner coordinates into single arrays    
image_points = []
obj_points = []
for ii in img_corner_success:
    obj_points.extend(orig_corners[ii][1])
    image_points.extend(img_corners[ii][1])
image_points = cv.fromarray(numpy.array(image_points, dtype='float32'))
obj_points = numpy.hstack((numpy.array(obj_points, dtype='float32'), numpy.zeros((len(obj_points), 1), dtype='float32')))
obj_points = cv.fromarray(numpy.array(obj_points, order='C'))

point_counts = numpy.ones((len(img_corner_success), 1), dtype='int32') * ((rank_count-1) * (file_count-1))
point_counts = cv.fromarray(point_counts)
# Create the output parameters
cam_mat = cv.CreateMat(3, 3, cv.CV_32FC1)
cv.Set2D(cam_mat, 0, 0, 1.0)
cv.Set2D(cam_mat, 1, 1, 1.0)
dist_mat = cv.CreateMat(5, 1, cv.CV_32FC1)
rot_vecs = cv.CreateMat(len(img_corner_success), 3, cv.CV_32FC1)
tran_vecs = cv.CreateMat(len(img_corner_success), 3, cv.CV_32FC1)
# Do the camera calibration
x = cv.CalibrateCamera2(obj_points, image_points, point_counts, cv.GetSize(gray_images[0]), cam_mat, dist_mat, rot_vecs, tran_vecs)
# Create the undistortion map
xmap = cv.CreateImage(cv.GetSize(images[0]), cv.IPL_DEPTH_32F, 1)
ymap = cv.CreateImage(cv.GetSize(images[0]), cv.IPL_DEPTH_32F, 1)
cv.InitUndistortMap(cam_mat, dist_mat, xmap, ymap)
# Now undistort all the images and save them
ii = 0
for tmp in images:
    print img_files[ii]
    image = cv.GetImage(tmp)
    t = cv.CloneImage(image)
    cv.Remap(t, image, xmap, ymap, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    corrected_file = os.path.join(os.path.dirname(img_files[ii]), 'corrected_%s' % (os.path.basename(img_files[ii])))
    cv.SaveImage(corrected_file, image)
    print 'Saved corrected image to', corrected_file
    ii += 1

Here are the images: A, B and B'. Actually, I don't think Remap is really doing anything!

A. Original Image B. Captured Image B'. Undistorted Image

1 Answer

0 votes
16 December 2011

I finally solved it. There were a couple of problems:

  1. The original images were not all of one size, and neither were the captured images. Hence, the affine transformation computed from one pair is not applicable to another. I resized them all to the same size.
  2. Undistortion after camera calibration is not sufficient to handle rotation and shift. The appropriate thing is an affine transformation. And it is better to take three corners of the chessboard as the points for computing the transformation matrix (less relative error).
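The three-corner idea in point 2 is exactly the problem cv.GetAffineTransform solves: three non-collinear point correspondences determine the 2x3 affine matrix uniquely via one small linear solve. A numpy sketch of the same construction (the helper name and the sample coordinates are made up for illustration):

```python
import numpy as np

def affine_from_three_points(src, dst):
    """Exact 2x3 affine matrix mapping three source points onto three
    destination points (the problem cv.GetAffineTransform solves)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])    # 3 x 3, one row per point
    # Each row of A maps through the unknown matrix onto a row of dst
    M = np.linalg.solve(A, dst).T            # 2 x 3
    return M

# e.g. three chessboard corners: top-left, top-right, bottom-right
src = [(0.0, 0.0), (9.0, 0.0), (9.0, 8.0)]
dst = [(12.0, 7.0), (20.5, 9.5), (18.0, 17.0)]
M = affine_from_three_points(src, dst)
```

Taking widely separated corners (e.g. opposite ends of the board) keeps the relative error of the estimated matrix small.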

Here is my working code (I transform the original images and save them to show that the computed transformation matrix indeed maps the original onto the captured image):

import pylab
import os
import cv
import cv2
import numpy

global_object_points = None
global_image_points = None
global_captured_corners = None
global_original_corners = None
global_success_index = None

global_font = cv.InitFont(cv.CV_FONT_HERSHEY_PLAIN, 1.0, 1.0)

def get_camera_calibration_data(original_image_list, captured_image_list, board_width, board_height):
    """Get the map for undistorting projected images by using a list of original chessboard images and the list of images that were captured by camera.

    original_image_list - list containing the original images (loaded as OpenCV image).

    captured_image_list - list containing the captured images.

    board_width - width of the chessboard (number of files - 1)

    board_height - height of the chessboard (number of ranks - 1)

    """
    global global_object_points
    global global_image_points
    global global_captured_corners
    global global_original_corners
    global global_success_index
    print 'get_camera_calibration_data'
    corner_count = board_width * board_height
    # Try to detect the corners of the chessboard. For each image,
    # FindChessboardCorners returns (found, corner_points). found is
    # True even if it managed to detect only a subset of the actual
    # corners.  NOTE: according to
    # http://opencv.willowgarage.com/wiki/documentation/cpp/calib3d/findChessboardCorners,
    # no need for FindCornerSubPix after FindChessBoardCorners
    captured_corners = [cv.FindChessboardCorners(img, (board_width, board_height)) for img in captured_image_list]
    original_corners = [cv.FindChessboardCorners(img, (board_width, board_height)) for img in original_image_list]
    success_captured = [index for index in range(len(captured_image_list))
                        if captured_corners[index][0] and len(captured_corners[index][1]) == corner_count]
    success_original = [index for index in range(len(original_image_list))
                        if original_corners[index][0] and len(original_corners[index][1]) == corner_count]
    success_index = [index for index in success_captured if (len(captured_corners[index][1]) == corner_count) and (index in success_original)]
    global_success_index = success_index
    print global_success_index
    print 'Successfully found corners in image #s.', success_index
    cv.NamedWindow('Image', cv.CV_WINDOW_AUTOSIZE)
    for index in success_index:
        copy = cv.CloneImage(original_image_list[index])
        cv.DrawChessboardCorners(copy, (board_width, board_height), original_corners[index][1], corner_count)
        cv.ShowImage('Image', copy)
        a = cv.WaitKey(0)
        copy = cv.CloneImage(captured_image_list[index])
        cv.DrawChessboardCorners(copy, (board_width, board_height), captured_corners[index][1], corner_count)
        cv.ShowImage('Image', copy)
        a = cv.WaitKey(0)
    cv.DestroyWindow('Image')
    if not success_index:
        return
    global_captured_corners = [captured_corners[index][1] for index in success_index]
    global_original_corners = [original_corners[index][1] for index in success_index]
    object_points = cv.CreateMat(len(success_index) * (corner_count), 3, cv.CV_32FC1)
    image_points = cv.CreateMat(len(success_index) * (corner_count), 2, cv.CV_32FC1)
    global_object_points = object_points
    global_image_points = image_points
    point_counts = cv.CreateMat(len(success_index), 1, cv.CV_32SC1)
    for ii in range(len(success_index)):        
        for jj in range(corner_count):
            cv.Set2D(object_points, ii * corner_count + jj, 0, float(jj/board_width))
            cv.Set2D(object_points, ii * corner_count + jj, 1, float(jj%board_width))
            cv.Set2D(object_points, ii * corner_count + jj, 2, float(0.0))
            cv.Set2D(image_points, ii * corner_count + jj, 0, captured_corners[success_index[ii]][1][jj][0])
            cv.Set2D(image_points, ii * corner_count + jj, 1, captured_corners[success_index[ii]][1][jj][1])
        cv.Set1D(point_counts, ii, corner_count)
    # Create the output parameters    
    camera_intrinsic_mat = cv.CreateMat(3, 3, cv.CV_32FC1)
    cv.Set2D(camera_intrinsic_mat, 0, 0, 1.0)
    cv.Set2D(camera_intrinsic_mat, 1, 1, 1.0)
    distortion_mat = cv.CreateMat(5, 1, cv.CV_32FC1)
    rotation_vecs = cv.CreateMat(len(success_index), 3, cv.CV_32FC1)
    translation_vecs = cv.CreateMat(len(success_index), 3, cv.CV_32FC1)
    print 'Before camera calibration'
    # Do the camera calibration
    cv.CalibrateCamera2(object_points, image_points, point_counts, cv.GetSize(original_image_list[0]), camera_intrinsic_mat, distortion_mat, rotation_vecs, translation_vecs)
    return (camera_intrinsic_mat, distortion_mat, rotation_vecs, translation_vecs)

if __name__ == '__main__':
    # angles - the angles at which the picture was rotated 
    angles = [0, 36, 72, 108, 144, 180, 216, 252, 288, 324]
    # orig_files - list of original picture files used for projection
    orig_files =  ['../calibration/checkerboard/o_orig_%d.png' % (angle) for angle in angles]
    # img_files - projected image captured by camera
    img_files = ['../calibration/checkerboard/captured_imag_%d.bmp' % (angle) for angle in angles]

    # orig_files = ['o%d.png' % (angle) for angle in range(10, 40, 10)]
    # img_files = ['d%d.png' % (angle) for angle in range(10, 40, 10)]
    # Load the images
    print 'Loading images'
    captured_images = [cv.LoadImage(filename) for filename in img_files]
    orig_images = [cv.LoadImage(filename) for filename in orig_files]
    # Convert to grayscale
    gray_images = [cv.CreateImage((src.width, src.height), cv.IPL_DEPTH_8U, 1) for src in captured_images]
    for ii in range(len(captured_images)):
        cv.CvtColor(captured_images[ii], gray_images[ii], cv.CV_RGB2GRAY)
        cv.ShowImage('win', gray_images[ii])
        cv.WaitKey(0)
    cv.DestroyWindow('win')
    gray_orig = [cv.CreateImage((src.width, src.height), cv.IPL_DEPTH_8U, 1) for src in orig_images]
    for ii in range(len(orig_images)):
        cv.CvtColor(orig_images[ii], gray_orig[ii], cv.CV_RGB2GRAY)

    # The number of ranks and files in the chessboard. OpenCV considers
    # the height and width of the chessboard to be one less than these,
    # respectively.
    rank_count = 10
    file_count = 11
    camera_intrinsic_mat, distortion_mat, rotation_vecs, translation_vecs = get_camera_calibration_data(gray_orig, gray_images, file_count-1, rank_count-1)
    xmap = cv.CreateImage(cv.GetSize(captured_images[0]), cv.IPL_DEPTH_32F, 1)
    ymap = cv.CreateImage(cv.GetSize(captured_images[0]), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(camera_intrinsic_mat, distortion_mat, xmap, ymap)
    # homography = cv.CreateMat(3, 3, cv.CV_32F)
    map_matrix = cv.CreateMat(2, 3, cv.CV_32F)
    source_points = (global_original_corners[0][0], global_original_corners[0][file_count-2], global_original_corners[0][(rank_count-1) * (file_count-1) -1])
    image_points = (global_captured_corners[0][0], global_captured_corners[0][file_count-2], global_captured_corners[0][(rank_count-1) * (file_count-1) -1])
    # cv.GetPerspectiveTransform(source, target, homography)
    cv.GetAffineTransform(source_points, image_points, map_matrix)
    ii = 0
    cv.NamedWindow('OriginalImage', cv.CV_WINDOW_AUTOSIZE)
    cv.NamedWindow('CapturedImage', cv.CV_WINDOW_AUTOSIZE)
    cv.NamedWindow('FixedImage', cv.CV_WINDOW_AUTOSIZE)
    for image in gray_images:
        # The affine transform should be ideally calculated once
        # outside this loop, but as the transform looks different for
        # each image, I'll just calculate it independently to see the
        # applicability
        try:
            # Try to find ii in the list of successful corner
            # detection indices and if found, use the corners for
            # computing the affine transformation matrix. This is only
            # required when the optics changes between two
            # projections, which should not happen.
            jj = global_success_index.index(ii)
            source_points = [global_original_corners[jj][0], global_original_corners[jj][rank_count-1], global_original_corners[jj][-1]]
            image_points = [global_captured_corners[jj][0], global_captured_corners[jj][rank_count-1], global_captured_corners[jj][-1]]
            cv.GetAffineTransform(source_points, image_points, map_matrix)
            print '---------------------------------------------------------------------'
            print orig_files[ii], '<-->', img_files[ii]
            print '---------------------------------------------------------------------'
            for kk in range(len(source_points)):
                print source_points[kk]
                print image_points[kk]
        except ValueError:
            # otherwise use the last used transformation matrix
            pass

        orig = cv.CloneImage(orig_images[ii])        
        cv.PutText(orig, '%s: original' % (os.path.basename(orig_files[ii])), (100, 100), global_font, 0.0)
        cv.ShowImage('OriginalImage', orig)
        target = cv.CloneImage(image)
        target.origin = image.origin
        cv.SetZero(target)
        cv.Remap(image, target, xmap, ymap, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
        cv.PutText(target, '%s: remapped' % (os.path.basename(img_files[ii])), (100, 100), global_font, 0.0)
        cv.ShowImage('CapturedImage', target)
        target = cv.CloneImage(orig_images[ii])
        cv.SetZero(target)
        cv.WarpAffine(orig_images[ii], target, map_matrix, cv.CV_INTER_LINEAR | cv.CV_WARP_FILL_OUTLIERS)
        corrected_file = os.path.join(os.path.dirname(img_files[ii]), 'corrected_%s' % (os.path.basename(img_files[ii])))
        cv.SaveImage(corrected_file, target)
        print 'Saved corrected image to', corrected_file
        # cv.WarpPerspective(image, target, homography, cv.CV_INTER_LINEAR | cv.CV_WARP_INVERSE_MAP | cv.CV_WARP_FILL_OUTLIERS)        
        cv.PutText(target, '%s: perspective-transformed' % (os.path.basename(img_files[ii])), (100, 100), global_font, 0.0)
        cv.ShowImage('FixedImage', target)
        print '==================================================================='
        cv.WaitKey(0)
        ii += 1
    cv.DestroyWindow('OriginalImage')
    cv.DestroyWindow('CapturedImage')
    cv.DestroyWindow('FixedImage')

And the images:

Original: original image

Captured image: captured image

Affine-transformed original: affine transformed original

Now applying the inverse transformation to the original image should solve the problem.
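Inverting the 2x3 affine matrix [R | t] is a small computation (modern OpenCV also provides cv2.invertAffineTransform for this). A numpy sketch with an illustrative matrix, not the one estimated above:

```python
import numpy as np

def invert_affine(M):
    """Invert a 2x3 affine matrix M = [R | t]: the forward map is
    y = R x + t, so the inverse map is x = R^{-1} (y - t)."""
    R = M[:, :2]
    t = M[:, 2]
    R_inv = np.linalg.inv(R)
    return np.hstack([R_inv, (-R_inv @ t)[:, None]])

# Example: a 90-degree rotation plus a shift
M = np.array([[0.0, -1.0, 5.0],
              [1.0,  0.0, 2.0]])
M_inv = invert_affine(M)
```

Warping the original image with M_inv (e.g. via cv.WarpAffine) produces the pre-compensated image A', so that the optics' own transform maps it back onto A.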
