How to Improve OCR Accuracy?

Source: original post by 小白 (小白学视觉)

OCR stands for Optical Character Recognition: converting photos of documents or natural scenes into machine-encoded text. There are many tools you can use to add OCR to your system, such as Tesseract OCR and Cloud Vision. They use AI and machine learning along with trained custom models. Text recognition depends on many factors to produce high-quality output. OCR output depends heavily on the quality of the input image, which is why every OCR engine publishes guidelines on input image quality and size; following them helps the engine produce accurate results.
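For example, a minimal call to Tesseract through the pytesseract wrapper might look like this (the file name is only an illustration, and Tesseract itself must be installed separately):

import pytesseract
from PIL import Image

# run Tesseract on the input photo and print the recognized text
text = pytesseract.image_to_string(Image.open("document.jpg"))
print(text)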

Image preprocessing improves the quality of the input image so the OCR engine can give us accurate output. The following image processing operations can be used to improve input image quality.

Image Scaling

Image scaling matters for image analysis. OCR engines generally produce accurate output on images at around 300 DPI. DPI describes the resolution of an image, in other words, the number of printed dots per inch.

import tempfile
from PIL import Image

def set_image_dpi(file_path):
    # resize so the width is at most 1024 px, then save the result at 300 DPI
    im = Image.open(file_path)
    length_x, width_y = im.size
    factor = min(1, float(1024.0 / length_x))
    size = int(factor * length_x), int(factor * width_y)
    im_resized = im.resize(size, Image.LANCZOS)  # Image.ANTIALIAS in older Pillow
    temp_file = tempfile.NamedTemporaryFile(delete=False, suffix='.png')
    temp_filename = temp_file.name
    im_resized.save(temp_filename, dpi=(300, 300))
    return temp_filename
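A quick usage sketch (the input path here is only an illustration): rescale first, then hand the temporary PNG to the OCR engine.

resized_path = set_image_dpi("scan.jpg")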

Skew Correction

A skewed image is a document image that is not straight. Skew directly affects the OCR engine's line segmentation and therefore lowers its accuracy. Correcting text skew involves the following steps.

1. Detect the skewed text block in the image

import cv2
import imutils

# 'image' is the input photo, e.g. loaded earlier with cv2.imread(...)
# convert to grayscale, blur, and detect edges
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(gray, 10, 50)

# keep the five largest contours and look for one with four corners,
# which we take to be the outline of the document
cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]
screenCnt = None
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        screenCnt = approx
        break

cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)
2. Compute the rotation angle

3. Rotate the image to correct the skew

import numpy as np

def order_points(pts):
    # initialize a list of coordinates that will be ordered
    # such that the first entry in the list is the top-left,
    # the second entry is the top-right, the third is the
    # bottom-right, and the fourth is the bottom-left
    rect = np.zeros((4, 2), dtype="float32")

    # the top-left point will have the smallest sum, whereas
    # the bottom-right point will have the largest sum
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]
    rect[2] = pts[np.argmax(s)]

    # now, compute the difference between the points, the
    # top-right point will have the smallest difference,
    # whereas the bottom-left will have the largest difference
    diff = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(diff)]
    rect[3] = pts[np.argmax(diff)]

    # return the ordered coordinates
    return rect

def four_point_transform(image, pts):
    # obtain a consistent order of the points and unpack them
    # individually
    rect = order_points(pts)
    (tl, tr, br, bl) = rect

    # compute the width of the new image, which will be the
    # maximum distance between bottom-right and bottom-left
    # x-coordinates or the top-right and top-left x-coordinates
    widthA = np.sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))
    widthB = np.sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))
    maxWidth = max(int(widthA), int(widthB))

    # compute the height of the new image, which will be the
    # maximum distance between the top-right and bottom-right
    # y-coordinates or the top-left and bottom-left y-coordinates
    heightA = np.sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))
    heightB = np.sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))
    maxHeight = max(int(heightA), int(heightB))

    # now that we have the dimensions of the new image, construct
    # the set of destination points to obtain a "birds eye view"
    # (i.e. top-down view) of the image, again specifying points
    # in the top-left, top-right, bottom-right, and bottom-left
    # order
    dst = np.array([
        [0, 0],
        [maxWidth - 1, 0],
        [maxWidth - 1, maxHeight - 1],
        [0, maxHeight - 1]], dtype="float32")

    # compute the perspective transform matrix and then apply it
    M = cv2.getPerspectiveTransform(rect, dst)
    warped = cv2.warpPerspective(image, M, (maxWidth, maxHeight))
    return warped

# 'orig' is the original full-resolution photo and 'ratio' is the scale factor
# between it and the resized copy in which screenCnt was detected
pts = np.array(screenCnt.reshape(4, 2) * ratio)
warped = four_point_transform(orig, pts)
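When a full perspective transform is not needed, steps 2 and 3 can also be handled by estimating the rotation angle directly. This is not the method used above, just a common alternative; a minimal sketch, assuming `image` is a binarized document image with white text on a black background:

import cv2
import numpy as np

def deskew(image):
    # fit a rotated bounding box around all foreground (text) pixels
    coords = np.column_stack(np.where(image > 0)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]

    # map the angle reported by minAreaRect to the actual skew
    # (the exact convention depends on the OpenCV version)
    if angle < -45:
        angle = -(90 + angle)
    else:
        angle = -angle

    # rotate around the image center by the estimated angle
    (h, w) = image.shape[:2]
    M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
    return cv2.warpAffine(image, M, (w, h),
                          flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)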

Binarization

OCR engines usually binarize images internally, since they work on black-and-white input. The simplest approach is to compute a threshold value, turn every pixel whose value is above the threshold white, and turn the remaining pixels black.
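A minimal sketch with OpenCV, using Otsu's method to choose the threshold automatically (the file names are only illustrations):

import cv2

# read the image as grayscale and let Otsu's method pick a global threshold
img = cv2.imread("document.png", 0)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("document_binary.png", binary)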


Noise Removal (Denoising)

Noise is random variation of color or brightness between the pixels of an image. Noise reduces the readability of the text in an image. There are two main types: salt-and-pepper noise and Gaussian noise.

import cv2
import numpy as np

def remove_noise_and_smooth(file_name):
    # adaptive thresholding copes with uneven lighting better than one global threshold
    img = cv2.imread(file_name, 0)
    filtered = cv2.adaptiveThreshold(img.astype(np.uint8), 255,
                                     cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY, 9, 41)

    # morphological opening and closing with a tiny kernel remove isolated specks
    kernel = np.ones((1, 1), np.uint8)
    opening = cv2.morphologyEx(filtered, cv2.MORPH_OPEN, kernel)
    closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)

    # image_smoothening is a separate helper (see the sketch below)
    img = image_smoothening(img)
    or_image = cv2.bitwise_or(img, closing)
    return or_image
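The snippet calls image_smoothening, which is not shown in the post. A minimal sketch of what such a helper might look like (the threshold value and kernel size are assumptions, not the author's exact settings):

def image_smoothening(img):
    # clip dark pixels, re-threshold with Otsu, blur lightly, and threshold once more
    _, th1 = cv2.threshold(img, 180, 255, cv2.THRESH_BINARY)
    _, th2 = cv2.threshold(th1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    blur = cv2.GaussianBlur(th2, (5, 5), 0)
    _, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return th3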


