/*====================================================================*
 -  Copyright (C) 2001 Leptonica.  All rights reserved.
 -
 -  Redistribution and use in source and binary forms, with or without
 -  modification, are permitted provided that the following conditions
 -  are met:
 -  1. Redistributions of source code must retain the above copyright
 -     notice, this list of conditions and the following disclaimer.
 -  2. Redistributions in binary form must reproduce the above
 -     copyright notice, this list of conditions and the following
 -     disclaimer in the documentation and/or other materials
 -     provided with the distribution.
 -
 -  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 -  ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 -  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 -  A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL ANY
 -  CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 -  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 -  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 -  PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
 -  OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 -  NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 -  SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *====================================================================*/

README (version 1.85.0)
File update: Oct 16 2024
---------------------------

gunzip leptonica-1.85.0.tar.gz
tar -xvf leptonica-1.85.0.tar


Building leptonica
I/O libraries leptonica is dependent on
Generating documentation using doxygen
Developing with leptonica
What's in leptonica?


Building leptonica

1. Top view

  This tar includes:
    (1) src: library source and function prototypes for building liblept
    (2) prog: source for regression test, usage example programs, and
        sample images
  for building on these platforms:
     -  Linux on x86 (i386) and AMD 64 (x64)
     -  OSX (both PowerPC and x86).
     -  Cygwin, msys and mingw on x86
  There is an additional zip file for building with MS Visual Studio.

  Libraries, executables and prototypes are easily made, as described below.

  When you extract from the archive, all files are put in a
  subdirectory 'leptonica-1.85.0'.  In that directory you will
  find a src directory containing the source files for the library,
  and a prog directory containing source files for various
  testing and example programs.

2. Building on Linux/Unix/MacOS

  The software can be downloaded from either a release tar file or
  from the current head of the source.  For the latter, go to a directory
  and clone the tree into it (note the '.' at the end):
     cd [some directory]
     git clone https://github.com/DanBloomberg/leptonica.git .

  There are three ways to build the library:

    (1) By customization:  Use the existing static makefile,
        src/makefile.static and customize the build by setting flags
        in src/environ.h.  See details below.
        Note: if you are going to develop with leptonica, the static
        makefiles are useful.

    (2) Using autoconf (supported by James Le Cuirot).
        Run ./configure in this directory to
        build Makefiles here and in src.  Autoconf handles the
        following automatically:
            * architecture endianness
            * enabling Leptonica I/O image read/write functions that
              depend on external libraries (if the libraries exist)
            * enabling functions for redirecting formatted image stream
              I/O to memory (on Linux only)
        After running ./configure: make; make install.  There's also
        a 'make check' for testing.

    (3) Using cmake (supported by Egor Pugin).
        The build must always be in a different directory from the root
        of the source (here).  It is common to build in a subdirectory
        of the root.  From the root directory, do this:
            mkdir build
            cd build
          Then to make only the library:
            cmake ..
            make
          To make both the library and the programs:
            cmake .. -DBUILD_PROG=1
            make
        To clean out the current build, just remove everything in
        the build subdirectory.

  In more detail for these three methods:

    (1) Customization using the static makefiles:

       * FIRST THING: Run make-for-local.  This simply renames
               src/makefile.static  -->  src/makefile
               prog/makefile.static -->  prog/makefile
         [Note: the autoconf build will not work if you have any files
          named "makefile" in src or prog.  If you've already run
          make-for-local and renamed the static makefiles, and you then
          want to build with autoconf, run make-for-auto to rename them
          back to makefile.static.]

       * You can customize for:
         (a) Including Leptonica image I/O functions that depend on external
             libraries, such as libpng.  Use environment variables in
             src/environ.h, such as HAVE_LIBPNG.
         (b) Disabling the GNU functions for redirecting formatted stream I/O
             to memory.  By default, HAVE_FMEMOPEN is enabled in src/environ.h.
         (c) Using special memory allocators (see src/environ.h).
         (d) Changing compile and runtime defaults for messages to stderr.
             The default in src/environ.h is to output info, warning and
             error messages.
         (e) Specifying the location of the object code.  By default it
             goes into a tree whose root is also the parent of the src
             and prog directories.  This can be changed using the
             ROOT_DIR variable in makefile.

       * Build the library:
         - To make an optimized version of the library (in src):
               make
         - To make a debug version of the library (in src):
               make DEBUG=yes debug
         - To make a shared library version (in src):
               make SHARED=yes shared
         - To make the prototype extraction program (in src):
               make   (to make the library first)
               make xtractprotos

       * To use shared libraries, you need to include the location of
         the shared libraries in your LD_LIBRARY_PATH.  For example,
         after you have built programs with 'make SHARED=yes' in the
         prog directory, you need to tell the programs where the shared
         libraries are:
             export LD_LIBRARY_PATH=../lib/shared:$LD_LIBRARY_PATH

       * Make the programs in prog/ (after you have made liblept):
         - Customize the makefile by setting ALL_LIBS to link the
           external image I/O libraries.  By default, ALL_LIBS assumes that
           libtiff, libjpeg and libpng are available.
         - To make an optimized version of all programs (in prog):
               make
         - To make a debug version of all programs (in prog):
               make DEBUG=yes
         - To make a shared library version of all programs (in prog):
               make SHARED=yes
         - To run the programs, be sure to set
             export LD_LIBRARY_PATH=../lib/shared:$LD_LIBRARY_PATH

    (2) Building using autoconf  (Thanks to James Le Cuirot)

       * If you downloaded from a release tar, it will be "configure ready".
       * If you cloned from the git master tree, you need to make the
         configure executable.  To do this, run
            autogen.sh.

       Use the standard incantation, in the root directory (the
       directory with configure):
          ./configure    [build the Makefile]
          make   [builds the library and shared library versions of
                  all the progs]
          make install  [as root; this puts liblept.a into /usr/local/lib/
                         and 13 of the progs into /usr/local/bin/ ]
          make [-j2] check  [runs the alltests_reg set of regression tests.
                             This works even if you build in a different
                             place from the distribution. The -j parameter
                             should not exceed half the number of cores.
                     NOTE: If the test fails, it's likely due to a race
                           condition.  Rerun with 'make check']

       Configure supports installing in a local directory (e.g., one that
       doesn't require root access).  For example, to install in $HOME/local,
           ./configure --prefix=$HOME/local/
           make install
       For different ways to build and link leptonica with tesseract, see
           https://github.com/tesseract-ocr/tesseract/wiki/Compiling
       In brief, using autotools to build tesseract and then install it
       in $HOME/local (after installing leptonica there), do the
       following from your tesseract root source directory:
            ./autogen.sh
            LIBLEPT_HEADERSDIR=$HOME/local/include ./configure \
               --prefix=$HOME/local/ --with-extra-libraries=$HOME/local/lib
            make install

       Configure also supports building in a separate directory from the
       source.  Run "/(path-to)/leptonica-1.85.0/configure" and then "make"
       from the desired build directory.

       Configure has a number of useful options; run "configure --help" for
       details.  If you're not planning to modify the library, adding the
       "--disable-dependency-tracking" option will speed up the build.  By
       default, both static and shared versions of the library are built.  Add
       the "--disable-shared" or "--disable-static" option if one or the other
       isn't needed.  To skip building the programs, use "--disable-programs".

       By default, the library is built with debugging symbols.  If you do not
       want these, use "CFLAGS=-O2 ./configure" to eliminate symbols for
       subsequent compilations, or "make CFLAGS=-O2" to override the default
       for compilation only.  Another option is to use the 'install-strip'
       target (i.e., "make install-strip") to remove the debugging symbols
       when the library is installed.

       Finally, if you find that the installed programs are unable to link
       at runtime to the installed library, which is in /usr/local/lib,
       try to run configure in this way:
           LDFLAGS="-Wl,-rpath -Wl,/usr/local/lib" ./configure
       which causes the compiler to pass those options through to the linker.

       For the Debian distribution, out of all the programs in the prog
       directory, we only build a small subset of general purpose
       utility programs.  This subset is the same set of programs that
       'make install' puts into /usr/local/bin.  It has no dependency on
       the image files that are bundled in the prog directory for testing.

    (3) Using cmake

       The usual method is to build in a directory that is a subdirectory
       of the root.  First do this from the root directory:
            mkdir build
            cd build

       The default build (shared libraries, no debug, only the library)
       is made with
            cmake ..
       For other options, you can use these flags on the cmake line:
       * To make a static library:
            cmake .. -DBUILD_SHARED_LIBS=OFF
            make
       * To make a dynamic library (default) and STATIC (or builtin) dependencies:
            cmake .. -DSW_BUILD_SHARED_LIBS=0
            make
       * To build with debug:
            cmake .. -DCMAKE_BUILD_TYPE=Debug
            make
       * To make both the library and the programs:
            cmake .. -DBUILD_PROG=1
            make

       The programs are put in build/bin/
       To run these (e.g., for testing), move them to the prog
       directory and run them from there:
          cd bin
          mv * ../../prog/
          cd ../../prog
          alltests_reg generate
          alltests_reg compare

       To build the library directly from the root directory instead of
       the build subdirectory:
          mkdir build
          cmake -H . -Bbuild   (-H means the source directory,
                                -B means the directory for the build)
          make

3. Building on Windows

   (a) Building with Visual Studio

       1. Download the latest SW
              (Software Network https://software-network.org/)
          client from https://software-network.org/client/
       2. Unpack it, add to PATH.
       3. Run once to perform cmake integration:
          sw setup
       4. Run:
          git clone https://github.com/danbloomberg/leptonica
          cd leptonica
          mkdir build
          cd build
          cmake ..
       5. Build a solution (leptonica.sln) in your Visual Studio version.

   (b) Building for mingw32 with MSYS
       (Thanks to David Bryan)

       MSYS is a Unix-compatible build environment for the Windows-native
       mingw32 compiler.  Selecting the "mingw-developer-toolkit",
       "mingw32-base", and "msys-base" packages during installation will allow
       building the library with autoconf as in (2) above.  It will also allow
       building with the static makefile as in (1) above if this option is used
       in the make command line:

         CC='gcc -std=c99 -U__STRICT_ANSI__'

       Only the static library may be built this way; the autoconf method must
       be used if a shared (DLL) library is desired.

       External image libraries (see below) must be downloaded separately,
       built, and installed before building the library.  Pre-built libraries
       are available from the ezwinports project.

   (c) Building for Cygwin
       (Thanks to David Bryan)

       Cygwin is a Unix-compatible build and runtime environment.  Adding the
       "binutils", "gcc-core", and "make" packages from the "Devel" category and
       the "diffutils" package from the "Utils" category to the packages
       installed by default will allow building the library with autoconf as in
       (2) above.  Pre-built external image libraries are available in the
       "Graphics" and "Libs" categories and may be selected for installation.
       If the libraries are not installed into the /lib, /usr/lib, or
       /usr/local/lib directories, you must run make with the
       "LDFLAGS=-L/(path-to-image)/lib" option.  Building may also be performed
       with the static makefile as in (1) above if this option is used in the
       make command:

         CC='gcc -std=c99 -U__STRICT_ANSI__'

       Only the static library may be built this way; the autoconf method must
       be used if a shared (DLL) library is desired.

4. Building and running oss-fuzz programs

   The oss-fuzz programs are in prog/fuzzing/.  They are run by oss-fuzz
   on a continual basis with random inputs.  clang-10, which is required
   to build these programs, can be installed using the command
       sudo apt-get install clang-10

   Stefan Weil has provided this method for building the fuzzing programs.
   From your github root:
       ./autogen.sh    (to make configure)
       mkdir -p bin/fuzzer
       cd bin/fuzzer
   Run configure to generate the Makefiles:
     address sanitizer issue:
       ../../configure CC=clang-10 CXX=clang++-10 CFLAGS="-g -O2 \
       -D_GLIBCXX_DEBUG -fsanitize=fuzzer-no-link,address,undefined" \
       CXXFLAGS="-g -O2 -D_GLIBCXX_DEBUG \
       -fsanitize=fuzzer-no-link,address,undefined"
     memory sanitizer issue:
       ../../configure CC=clang-10 CXX=clang++-10 CFLAGS="-g -O2 \
       -D_GLIBCXX_DEBUG -fsanitize=fuzzer-no-link,memory,undefined" \
       CXXFLAGS="-g -O2 -D_GLIBCXX_DEBUG \
       -fsanitize=fuzzer-no-link,memory,undefined"
   Build:
     address sanitizer issue:
       make fuzzers CXX=clang++-10 CXXFLAGS="-g -O2 -D_GLIBCXX_DEBUG \
       -fsanitize=fuzzer,address,undefined"
     memory sanitizer issue:
       make fuzzers CXX=clang++-10 CXXFLAGS="-g -O2 -D_GLIBCXX_DEBUG \
       -fsanitize=fuzzer,memory,undefined"

   When an oss-fuzz issue has been created, download the Reproducer
   Testcase file (e.g., name it "/tmp/payload").  To run one of the
   fuzzing executables in bin/fuzzer, e.g., pix4_fuzzer:
       cd ../../prog/fuzzing
       ../../bin/fuzzer/pix4_fuzzer /tmp/payload

5. The 270+ programs in the prog directory are an integral part of this
   package.  They can be divided into four groups:

   (1) Programs that are useful applications for running on the
       command line.  They can be installed from autoconf builds
       using 'make install'.  Examples of these are the PostScript
       and pdf conversion programs: converttopdf, converttops,
       convertfilestopdf, convertfilestops, convertsegfilestopdf,
       convertsegfilestops, imagetops, printimage and printsplitimage.

   (2) Programs that are used as regression tests in alltests_reg.
       These are named *_reg, and 100 of them are invoked together
       (alltests_reg).  The regression test framework has been
       standardized, and regression tests are relatively easy
       to write.  See regutils.h for details.

   (3) Other regression tests, some of which have not (yet) been
       put into the framework.  They are also named *_reg.

   (4) Programs that were used to test library functions or auto-generate
       library code.  These are useful for testing the behavior of small
       sets of functions and for providing example code.

6. Sanitizers can be used on all the regression tests in alltests_reg.c.

   First run autogen.sh to generate the configure script
     autogen.sh
   Then run configure to generate the Makefile with the address sanitizer
     ./configure '--disable-shared' '--enable-debug' 'CFLAGS=-D_GLIBCXX_DEBUG -DDEBUG=1 -Wall -pedantic -g -O0 -fsanitize=address,undefined -fstack-protector-strong -ftrapv'
   Make and run all the regression tests
      make check

I/O libraries leptonica is dependent on

   Leptonica is configured to handle image I/O using these external
   libraries: libjpeg, libtiff, libpng, libz, libwebp, libgif, libopenjp2

   These libraries are easy to obtain.  For example, using the
   Debian package manager:
       sudo apt-get install <pkg>
    where <pkg> = {libpng-dev, libjpeg62-turbo-dev, libtiff5-dev,
                   libwebp-dev, libopenjp2-7-dev, libgif-dev}.

   Leptonica also allows image I/O with bmp and pnm formats, for which
   we provide the serializers (encoders and decoders).  It also
   gives output drivers for wrapping images in PostScript and PDF, which
   in turn use tiffg4, jpeg and flate (i.e., zlib) encoding.  PDF will
   also wrap jpeg2000 images.

   There is a programmatic interface to gnuplot.  To use it, you
   need only the gnuplot executable (suggest version 3.7.2 or later);
   the gnuplot library is not required.

   If you build with automake, libraries on your system will be
   automatically found and used.

   The rest of this section is for building with the static makefiles.
   The entries in environ.h specify which of these libraries to use.
   The default is to link to these four libraries:
       libjpeg.a  (standard jfif jpeg library, version 6b or 7, 8 or 9)
       libtiff.a  (standard Leffler tiff library, version 3.7.4 or later;
                   current non-beta version is 3.8.2)
       libpng.a   (standard png library, suggest version 1.4.0 or later)
       libz.a     (standard gzip library, suggest version 1.2.3)

   These libraries (and their shared versions) should be in /usr/lib.
   (If they're not, you can change the LDFLAGS variable in the makefile.)
   Additionally, for compilation, the following header files are
   assumed to be in /usr/include:
      jpeg:  jconfig.h
      png:   png.h, pngconf.h
      tiff:  tiff.h, tiffio.h

   If for some reason you do not want to link to specific libraries,
   even if you have them, stub files are included for the ten
   different output formats:
        bmp, jpeg, png, pnm, ps, pdf, tiff, gif, webp and jp2.
   For example, if you don't want to include the tiff library,
   in environ.h set:
       #define  HAVE_LIBTIFF   0
   and the stubs will be linked in.

   To read and write webp files:
      (1) Download libwebp from sourceforge
      (2) #define HAVE_LIBWEBP   1  (in environ.h)
      (3) In prog/makefile, edit ALL_LIBS to include -lwebp
      (4) The library will be installed into /usr/local/lib.
          You may need to add that directory to LDFLAGS; or, equivalently,
          add that path to the LD_LIBRARY_PATH environment variable.

   To read and write jpeg2000 files:
      (1) Download libopenjp2, version 2.3, from their distribution.
      (2) #define HAVE_LIBJP2K   1  (in environ.h)
      (2a) If you have version 2.X, X != 3, edit LIBJP2K_HEADER  (in environ.h)
      (3) In prog/makefile, edit ALL_LIBS to include -lopenjp2
      (4) The library will be installed into /usr/local/lib.

   To read and write gif files:
      (1) Download version giflib-5.1.X+ from sourceforge
      (2) #define  HAVE_LIBGIF   1  (in environ.h)
      (3) In prog/makefile, edit ALL_LIBS to include -lgif
      (4) The library will be installed into /usr/local/lib.

Generating documentation using doxygen

The source code is set up to allow generation of documentation using doxygen.
To do this:
(1) Download the Debian doxygen package:
     sudo apt-get install doxygen
(2) In the root source directory, which contains the Doxyfile:
     doxygen Doxyfile
The documentation will be generated in a 'doc' subdirectory, accessible
from this file (relative to the root)
    ./doc/html/index.html

Developing with leptonica

You are encouraged to use the static makefiles if you are developing
applications using leptonica.  The following instructions assume
that you are using the static makefiles and customizing environ.h.

1. Automatic generation of prototypes

   The prototypes are automatically generated by the program xtractprotos.
   They can either be put in-line into allheaders.h, or they can be
   written to a file leptprotos.h, which is #included in allheaders.h.
   Note: (1) We supply the former version of allheaders.h.
         (2) all .c files simply include allheaders.h.

   First, make xtractprotos:
       make xtractprotos

   Then to generate the prototypes and make allheaders.h, do one of
   these two things:
       make allheaders  [puts everything into allheaders.h]
       make allprotos   [generates a file leptprotos.h containing the
                         function prototypes, and includes it in allheaders.h]

   Things to note about xtractprotos, assuming that you are developing
   in Leptonica and need to regenerate the prototypes in allheaders.h:

     (1) xtractprotos is part of Leptonica.  You can 'make' it in either
         src or prog (see the makefile).
     (2) You can output the prototypes for any C file to stdout by running:
              xtractprotos <cfile>      or
              xtractprotos -prestring=[string] <cfile>
     (3) The source for xtractprotos has been packaged up into a tar
         containing just the Leptonica files necessary for building it
         in Linux.  The tar file is available at:
             www.leptonica.com/source/xtractlib-1.5.tar.gz

2. Global parameter to enable development and testing

   For security reasons, with the exception of the regression utility
   (regutils.c), leptonica as shipped (starting with 1.77) does not allow:
      * 'system(3)' fork/exec
      * writes to temp files with compiled-in names
   System calls are used either to run gnuplot or display an image on
   the screen.  

   This is enforced with a global parameter, LeptDebugOK, initialized to 0.
   It can be overridden either at compile time by changing the initialization
   (in writefile.c), or at runtime, using setLeptDebugOK().
   The programs in the prog directory, which mostly test functions in
   the library, are not subject to this restriction.
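
   As a minimal sketch of what this means for a small test program (the
   image name is just an example; see allheaders.h for the prototypes):

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs;

           setLeptDebugOK(1);    /* allow system() calls and named temp files */
           pixs = pixRead("test8.jpg");
           pixDisplay(pixs, 100, 100);   /* display requires LeptDebugOK == 1 */
           pixDestroy(&pixs);
           return 0;
       }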

3. GNU runtime functions for stream redirection to memory

   There are two non-standard gnu functions, fmemopen() and open_memstream(),
   that only work on Linux and conveniently allow memory I/O with a file
   stream interface.  This is convenient for compressing and decompressing
   image data to memory rather than to file.  Stubs are provided
   for all these I/O functions.  Default is to enable them; OSX developers
   must disable by setting #define HAVE_FMEMOPEN  0  (in environ.h).
   If these functions are not enabled, raster to compressed data in
   memory is accomplished safely but through a temporary file.
   See item 9 ("Image I/O") under "What's in leptonica?" for more
   details on image I/O formats.

   If you're building with the autoconf programs, these two functions are
   automatically enabled if available.
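
   In either case, reading and writing compressed image data directly in
   memory looks like the sketch below.  pixWriteMem() allocates the
   buffer, which the caller frees; the image name is one of the samples
   in prog.

       #include "allheaders.h"

       int main(void)
       {
       l_uint8  *data = NULL;
       size_t    size = 0;
       PIX      *pix1, *pix2;

           pix1 = pixRead("feyn.tif");
           pixWriteMem(&data, &size, pix1, IFF_PNG);   /* encode to memory */
           pix2 = pixReadMem(data, size);              /* decode from memory */
           lept_free(data);
           pixDestroy(&pix1);
           pixDestroy(&pix2);
           return 0;
       }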

4. Runtime functions not available on all platforms

   Some functions are not available on all systems.  One example of such a
   function is fstatat().  If possible, such functions will be replaced by
   wrappers, stubs or behavioral equivalent functions.  By default, such
   functions are disabled; enable them by setting #define HAVE_FUNC  1 (in
   environ.h).

   If you're building with the autoconf or cmake programs, these functions are
   automatically enabled if available.

5. Typedefs

   A deficiency of C is that no standard has been universally
   adopted for typedefs of the built-in types.  As a result,
   typedef conflicts are common, and cause no end of havoc when
   you try to link different libraries.  If you're lucky, you
   can find an order in which the libraries can be linked
   to avoid these conflicts, but the state of affairs is aggravating.

   The most common typedefs use lower case variables: uint8, int8, ...
   The png library avoids typedef conflicts by altruistically
   appending "png_" to the type names.  Following that approach,
   Leptonica appends "l_" to the type name.  This should avoid
   just about all conflicts.  In the highly unlikely event that it doesn't,
   here's a simple way to change the type declarations throughout
   the Leptonica code:
    (1) customize a file "converttypes.sed" with the following lines:
        /l_uint8/s//YOUR_UINT8_NAME/g
        /l_int8/s//YOUR_INT8_NAME/g
        /l_uint16/s//YOUR_UINT16_NAME/g
        /l_int16/s//YOUR_INT16_NAME/g
        /l_uint32/s//YOUR_UINT32_NAME/g
        /l_int32/s//YOUR_INT32_NAME/g
        /l_float32/s//YOUR_FLOAT32_NAME/g
        /l_float64/s//YOUR_FLOAT64_NAME/g
    (2) in the src and prog directories:
       - if you have a version of sed that does in-place conversion:
            sed -i -f converttypes.sed *
       - else, do something like (in csh)
           foreach file (*)
           sed -f converttypes.sed $file > tempdir/$file
           end

   If you are using Leptonica with a large code base that typedefs the
   built-in types differently from Leptonica, just edit the typedefs
   in environ.h.  This should have no side-effects with other libraries,
   and no issues should arise with the location in which liblept is
   included.

   For compatibility with 64 bit hardware and compilers, where
   necessary we use the typedefs in stdint.h to specify the pointer
   size (either 4 or 8 byte).

6. Compile and runtime control over stderr output (see environ.h and utils1.c)

   Leptonica provides both compile-time and run-time control over
   messages and debug output (thanks to Dave Bryan).  Both compile-time
   and run-time severity thresholds can be set.  The runtime threshold
   can also be set by an environmental variable.  Messages are
   vararg-formatted and of 3 types: error, warning, informational.
   These are all macros, and can be further suppressed when
   NO_CONSOLE_IO is defined on the compile line.  For production code
   where no output is to go to stderr, compile with -DNO_CONSOLE_IO.

   Runtime redirection of stderr output is also possible, using a
   callback mechanism.  The callback function is registered using
   leptSetStderrHandler().   See utils1.c for details.
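
   A sketch of run-time control (the messages actually emitted also
   depend on the compile-time threshold set in environ.h):

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pix;

           setMsgSeverity(L_SEVERITY_ERROR);   /* suppress info and warnings */
           pix = pixRead("no-such-file");      /* error message still printed */
           pixDestroy(&pix);
           setMsgSeverity(L_SEVERITY_INFO);    /* restore informational output */
           return 0;
       }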

7. In-memory raster format (Pix)

   Unlike many other open source packages, Leptonica uses packed
   data for images with all bit/pixel (bpp) depths, allowing us
   to process pixels in parallel.  For example, rasterops works
   on all depths with 32-bit parallel operations throughout.
   Leptonica is also explicitly configured to work on both little-endian
   and big-endian hardware.  RGB image pixels are always stored
   in 32-bit words, and a few special functions are provided for
   scaling and rotation of RGB images that have been optimized by
   making explicit assumptions about the location of the R, G and B
   components in the 32-bit pixel.  In such cases, the restriction
   is documented in the function header.  The in-memory data structure
   used throughout Leptonica to hold the packed data is a Pix,
   which is defined and documented in pix.h.  The alpha component
   in RGB images is significantly better supported, starting in 1.70.

   Additionally, a FPix is provided for handling 2D arrays of floats,
   and a DPix is provided for 2D arrays of doubles.  Converters
   between these and the Pix are given.
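
   A minimal sketch of creating and accessing a Pix (see pix.h for the
   struct and allheaders.h for the accessors):

       #include "allheaders.h"

       int main(void)
       {
       l_uint32  val;
       PIX      *pixs;

           pixs = pixCreate(200, 100, 32);     /* 200 x 100, 32 bpp rgb */
           composeRGBPixel(255, 0, 0, &val);   /* pack (r,g,b) into one word */
           pixSetPixel(pixs, 10, 20, val);     /* (x, y) addressing */
           pixGetPixel(pixs, 10, 20, &val);
           pixDestroy(&pixs);
           return 0;
       }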

8. Conversion between Pix and other in-memory raster formats

   If you use Leptonica with other imaging libraries, you will need
   functions to convert between the Pix and other image data
   structures.  To make a Pix from other image data structures, you
   will need to understand pixel packing, pixel padding, component
   ordering and byte ordering on raster lines.  See the file pix.h
   for the specification of image data in the pix.
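
   As a sketch, an external 8 bpp grayscale buffer in row-major order can
   be copied into a Pix using the accessors and the endian-safe macros in
   arrayaccess.h.  The buffer and its dimensions are assumed to come from
   the other library; the function name here is just illustrative.

       #include "allheaders.h"

       /* rawdata: w * h bytes, one byte per pixel, row-major order */
       PIX *MakePixFromGrayBuffer(const l_uint8 *rawdata, l_int32 w, l_int32 h)
       {
       l_int32    i, j, wpl;
       l_uint32  *data, *line;
       PIX       *pix;

           pix = pixCreate(w, h, 8);
           data = pixGetData(pix);
           wpl = pixGetWpl(pix);    /* 32-bit words per raster line */
           for (i = 0; i < h; i++) {
               line = data + i * wpl;
               for (j = 0; j < w; j++)
                   SET_DATA_BYTE(line, j, rawdata[i * w + j]);
           }
           return pix;
       }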

9. Custom memory management

   Leptonica allows you to use custom memory management (allocator,
   deallocator).  For Pix, which tend to be large, the alloc/dealloc
   functions can be set programmatically.  For all other structs and arrays,
   the allocators are specified in environ.h.  Default functions
   are malloc and free.  We have also provided a sample custom
   allocator/deallocator for Pix, in pixalloc.c.
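
   A sketch of installing custom Pix allocators, assuming the
   setPixMemoryManager() interface in pix1.c; the wrapper names are just
   illustrative, and the call should be made before any Pix is created:

       #include <stdlib.h>
       #include "allheaders.h"

       static void *my_alloc(size_t size) { return malloc(size); }
       static void  my_dealloc(void *ptr) { free(ptr); }

       int main(void)
       {
       PIX  *pix;

           setPixMemoryManager(my_alloc, my_dealloc);  /* assumed interface */
           pix = pixCreate(100, 100, 8);   /* image data allocated by my_alloc */
           pixDestroy(&pix);               /* ... and freed by my_dealloc */
           return 0;
       }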

What's in leptonica?

1. Rasterops

   This is a source for a clean, fast implementation of rasterops.
   You can find details starting at the Leptonica home page,
   and also by looking directly at the source code.
   Some of the low-level code is in roplow.c, and an interface is
   given in rop.c to the simple Pix image data structure.
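
   A sketch of a typical call: pixRasterop() combines a rectangle of a
   source Pix into a destination Pix of the same depth, using a boolean
   op such as PIX_SRC, PIX_PAINT or PIX_XOR.

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixd, *pixs;

           pixd = pixCreate(500, 500, 1);   /* blank 1 bpp destination */
           pixs = pixRead("feyn.tif");      /* 1 bpp sample image in prog */
           /* OR a 200 x 100 rect of the source, taken from its origin,
            * into the dest with its upper-left corner at (50, 50) */
           pixRasterop(pixd, 50, 50, 200, 100, PIX_PAINT, pixs, 0, 0);
           pixWrite("/tmp/result.png", pixd, IFF_PNG);
           pixDestroy(&pixd);
           pixDestroy(&pixs);
           return 0;
       }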

2. Binary morphology

   This is a source for efficient implementations of binary morphology.
   Details are found starting at the Leptonica home page, and by reading
   the source code.

   Binary morphology is implemented two ways:

     (a) Successive full image rasterops for arbitrary
         structuring elements (Sels)

     (b) Destination word accumulation (dwa) for specific Sels.
         This code is automatically generated.  See, for example,
         the code in fmorphgen.1.c and fmorphgenlow.1.c.
         These files were generated by running the program
         prog/fmorphautogen.c. Results can be checked by comparing dwa
         and full image rasterops; e.g., prog/fmorphauto_reg.c.

   Method (b) is considerably faster than (a), which is the
   reason we've gone to the effort of supporting the use
   of this method for all Sels.  We also support two different
   boundary conditions for erosion.

   Similarly, dwa code for the general hit-miss transform can
   be auto-generated from an array of hit-miss Sels.
   When prog/fhmtautogen.c is compiled and run, it generates
   the dwa C code in fhmtgen.1.c and fhmtgenlow.1.c.  These
   files can then be compiled into the libraries or into other programs.
   Results can be checked by comparing dwa and rasterop results;
   e.g., prog/fhmtauto_reg.c

   Several functions with simple parsers are provided to execute a
   sequence of morphological operations (plus binary rank reduction
   and replicative expansion).  See morphseq.c.

   The structuring element is represented by a simple Sel data structure
   defined in morph.h.  We provide (at least) seven ways to generate
   Sels in sel1.c, and several simple methods to generate hit-miss
   Sels for pattern finding in selgen.c.

   In use, the most common morphological Sels are separable bricks,
   of dimension n x m (where either n or m, but not both, is commonly 1).
   Accordingly, we provide separable morphological operations on brick
   Sels, using for binary both rasterops and dwa.  Parsers are provided
   for a sequence of separable binary (rasterop and dwa) and grayscale
   brick morphological operations, in morphseq.c.  The main
   advantage in using the parsers is that you don't have to create
   and destroy Sels, or do any of the intermediate image bookkeeping.

   We also give composable separable brick functions for binary images,
   for both rasterop and dwa.  These decompose each of the linear
   operations into a sequence of two operations at different scales,
   reducing the operation count to a sum of decomposition factors,
   rather than the (un-decomposed) product of factors.
   As always, parsers are provided for a sequence of such operations.
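
   As a sketch, here is a brick dilation and a morph sequence on a 1 bpp
   image (the sequence syntax, e.g. "o5.5" for a 5 x 5 opening, is
   documented in morphseq.c):

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs, *pix1, *pix2;

           pixs = pixRead("feyn.tif");                   /* 1 bpp input */
           pix1 = pixDilateBrick(NULL, pixs, 5, 5);      /* 5 x 5 brick dilation */
           pix2 = pixMorphSequence(pixs, "o5.5 + c3.3", 0);  /* open, then close */
           pixDestroy(&pixs);
           pixDestroy(&pix1);
           pixDestroy(&pix2);
           return 0;
       }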

3. Grayscale morphology and rank order filters

   We give an efficient implementation of grayscale morphology for brick
   Sels.  See the Leptonica home page and the source code.

   Brick Sels are separable into linear horizontal and vertical elements.
   We use the van Herk/Gil-Werman algorithm, which performs the calculations
   in a time that is independent of the size of the Sels.  Implementations
   of tophat and hdome are also given.

   We also provide grayscale rank order filters for brick filters.
   The rank order filter is a generalization of grayscale morphology,
   that selects the rank-valued pixel (rather than the min or max).
   A color rank order filter applies the grayscale rank operation
   independently to each of the (r,g,b) components.
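
   A sketch of grayscale brick operations: a white tophat and a median,
   which is the rank filter at rank 0.5 (check allheaders.h for the exact
   prototypes):

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs, *pix1, *pix2;

           pixs = pixRead("test8.jpg");                     /* 8 bpp grayscale */
           pix1 = pixTophat(pixs, 15, 15, L_TOPHAT_WHITE);  /* white tophat */
           pix2 = pixRankFilterGray(pixs, 5, 5, 0.5);       /* 5 x 5 median */
           pixDestroy(&pixs);
           pixDestroy(&pix1);
           pixDestroy(&pix2);
           return 0;
       }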

4. Image scaling

   Leptonica provides many simple and relatively efficient
   implementations of image scaling.  Some of them are listed here;
   for the full set see the web page and the source code.

   Grayscale and color images are scaled using:
      - sampling
      - lowpass filtering followed by sampling,
      - area mapping
      - linear interpolation

   Scaling operations with antialiased sampling, area mapping,
   and linear interpolation are limited to 2, 4 and 8 bpp gray,
   24 bpp full RGB color, and 2, 4 and 8 bpp colormapped
   (bpp == bits/pixel).  Scaling operations with simple sampling
   can be done at 1, 2, 4, 8, 16 and 32 bpp.  Linear interpolation
   is slower but gives better results, especially for upsampling.
   For moderate downsampling, best results are obtained with area
   mapping scaling.  With very high downsampling, either area mapping
   or antialias sampling (lowpass filter followed by sampling) gives
   good results.  Fast area mapping with power-of-2 reduction is also
   provided.  Optional sharpening after resampling is provided to
   improve appearance by reducing the visual effect of averaging
   across sharp boundaries.
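
   A sketch of the most common entry points: pixScale() selects a
   reasonable method from the depth and the scale factors, and
   pixScaleToGray() takes a 1 bpp input.

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pix1, *pix2, *pix3, *pix4;

           pix1 = pixRead("marge.jpg");          /* 32 bpp rgb sample in prog */
           pix2 = pixScale(pix1, 0.4, 0.4);      /* downscale by 0.4 */
           pix3 = pixRead("feyn.tif");           /* 1 bpp */
           pix4 = pixScaleToGray(pix3, 0.125);   /* 8x scale-to-gray reduction */
           pixDestroy(&pix1);
           pixDestroy(&pix2);
           pixDestroy(&pix3);
           pixDestroy(&pix4);
           return 0;
       }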

   For fast analysis of grayscale and color images, it is useful to
   have integer subsampling combined with pixel depth reduction.
   RGB color images can thus be converted to low-resolution
   grayscale and binary images.

   For binary scaling, the dest pixel can be selected from the
   closest corresponding source pixel.  For the special case of
   power-of-2 binary reduction, low-pass rank-order filtering can be
   done in advance.  Isotropic integer expansion is done by pixel replication.

   We also provide 2x, 3x, 4x, 6x, 8x, and 16x scale-to-gray reduction
   on binary images, to produce high quality reduced grayscale images.
   These are integrated into a scale-to-gray function with arbitrary
   reduction.

   Conversely, we have special 2x and 4x scale-to-binary expansion
   on grayscale images, using linear interpolation on grayscale
   raster line buffers followed by either thresholding or dithering.

   There are also image depth converters that don't have scaling,
   such as unpacking operations from 1 bpp to grayscale, and
   thresholding and dithering operations from grayscale to 1, 2 and 4 bpp.

5. Image shear and rotation (and affine, projective, ...)

   Image shear is implemented with both rasterops and linear interpolation.
   The rasterop implementation is faster and has no constraints on image
   depth.  We provide horizontal and vertical shearing about an
   arbitrary point (really, a line), both in-place and from source to dest.
   The interpolated shear is used on 8 bpp and 32 bpp images, and
   gives a smoother result.  Shear is used for the fastest implementations
   of rotation.

   There are three different types of general image rotators:

     a.  Grayscale rotation using area mapping
         - pixRotateAM() for 8 bit gray and 24 bit color, about center
         - pixRotateAMCorner() for 8 bit gray, about image UL corner
         - pixRotateAMColorFast() for faster 24 bit color, about center

     b.  Rotation of an image of arbitrary bit depth, using
         either 2 or 3 shears.  These rotations can be done
         about an arbitrary point, and they can be either
         from source to dest or in-place; e.g.
         - pixRotateShear()
         - pixRotateShearIP()

     c.  Rotation by sampling.  This can be used on images of arbitrary
         depth, and done about an arbitrary point.  Colormaps are retained.

   The area mapping rotations are slower and more accurate, because each
   new pixel is composed using an average of four neighboring pixels
   in the original image; this is sometimes also called "antialiasing".
   Very fast color area mapping rotation is provided.

   The shear rotations are much faster, and work on images of arbitrary
   pixel depth, but they just move pixels around without doing any averaging.
   pixRotateShearIP() operates on the image in-place.

   We also provide orthogonal rotators (90, 180, 270 degree; left-right
   flip and top-bottom flip) for arbitrary image depth.
   And we provide implementations of affine, projective and bilinear
   transforms, with both sampling (for speed) and interpolation
   (for antialiasing).
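
   A sketch of the general rotator (the angle is in radians; the type and
   incolor flags select the method and the color brought in at the edges):

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs, *pixd;

           pixs = pixRead("marge.jpg");
           /* rotate by 0.1 rad about the center, area mapping,
            * bringing in white pixels at the boundary */
           pixd = pixRotate(pixs, 0.1, L_ROTATE_AREA_MAP, L_BRING_IN_WHITE, 0, 0);
           pixWrite("/tmp/rotated.jpg", pixd, IFF_JFIF_JPEG);
           pixDestroy(&pixs);
           pixDestroy(&pixd);
           return 0;
       }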

6. Sequential algorithms

   We provide a number of fast sequential algorithms, including
   binary and grayscale seedfill, and the distance function for
   a binary image.  The most efficient binary seedfill is
   pixSeedfill(), which uses Luc Vincent's algorithm to iterate
   raster- and antiraster-ordered propagation, and can be used
   for either 4- or 8-connected fills.  Similar raster/antiraster
   sequential algorithms are used to generate a distance map from
   a binary image, and for grayscale seedfill.  We also use Heckbert's
   stack-based filling algorithm for identifying 4- and 8-connected
   components in a binary image.  A fast implementation of the
   watershed transform, using a priority queue, is included.
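
   For example, a sketch that counts the 8-connected components in a
   binary image; pixConnComp() returns the bounding boxes and, optionally,
   a pixa of the component images:

       #include <stdio.h>
       #include "allheaders.h"

       int main(void)
       {
       l_int32  n;
       BOXA    *boxa;
       PIX     *pixs;

           pixs = pixRead("feyn.tif");            /* 1 bpp input */
           boxa = pixConnComp(pixs, NULL, 8);     /* 8-connected; no pixa */
           n = boxaGetCount(boxa);
           fprintf(stderr, "found %d components\n", n);
           boxaDestroy(&boxa);
           pixDestroy(&pixs);
           return 0;
       }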

7. Image enhancement

   Some simple image enhancement routines for grayscale and color
   images have been provided.  These include intensity mapping with
   gamma correction and contrast enhancement, histogram equalization,
   edge sharpening, smoothing, and various color-shifting operations.
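
   A sketch of two of these on an 8 bpp grayscale image (gamma correction
   with dynamic range expansion, and unsharp masking):

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs, *pix1, *pix2;

           pixs = pixRead("test8.jpg");                   /* 8 bpp grayscale */
           pix1 = pixGammaTRC(NULL, pixs, 1.7, 30, 230);  /* gamma = 1.7 */
           pix2 = pixUnsharpMasking(pixs, 3, 0.3);        /* mild sharpening */
           pixDestroy(&pixs);
           pixDestroy(&pix1);
           pixDestroy(&pix2);
           return 0;
       }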

8. Convolution and cousins

   A number of standard image processing operations are also
   included, such as block convolution, binary block rank filtering,
   grayscale and rgb rank order filtering, and edge and local
   minimum/maximum extraction.   Generic convolution is included,
   for both separable and non-separable kernels, using float arrays
   in the Pix.  Two implementations are included for grayscale and
   color bilateral filtering: a straightforward (slow) one, and a
   fast, approximate, separable one.
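
   A sketch of block convolution, where the size arguments are
   "half-widths" (a 5,5 call gives an 11 x 11 averaging kernel):

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs, *pixd;

           pixs = pixRead("test8.jpg");        /* 8 bpp grayscale */
           pixd = pixBlockconv(pixs, 5, 5);    /* 11 x 11 block convolution */
           pixDestroy(&pixs);
           pixDestroy(&pixd);
           return 0;
       }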

9. Image I/O

   Some facilities have been provided for image input and output.
   This is of course required to build executables that handle images,
   and many examples of such programs, most of which are for
   testing, can be built in the prog directory.  Functions have been
   provided to allow reading and writing of files in JPEG, PNG,
   TIFF, BMP, PNM, GIF, WEBP and JP2 formats.  These formats were chosen
   for the following reasons:

    - JFIF JPEG is the standard method for lossy compression
      of grayscale and color images.  It is supported natively
      in all browsers, and uses a good open source compression
      library.  Decompression is supported by the rasterizers
      in PS and PDF, for level 2 and above.  It has a progressive
      mode that compresses about 10% better than standard, but
      is considerably slower to decompress.  See jpegio.c.

    - PNG is the standard method for lossless compression
      of binary, grayscale and color images.  It is supported
      natively in all browsers, and uses a good open source
      compression library (zlib).  It is superior in almost every
      respect to GIF (which, until recently, contained proprietary
      LZW compression).  See pngio.c.

    - TIFF is a common interchange format, which supports different
      depths, colormaps, etc., and also has a relatively good and
      widely used binary compression format (CCITT Group 4).
      Decompression of G4 is supported by rasterizers in PS and PDF,
      level 2 and above.  G4 compresses better than PNG for most
      text and line art images, but it does quite poorly for halftones.
      It has good and stable support by Leffler's open source library,
      which is clean and small.  Tiff also supports multipage
      images through a directory structure.  Note: because jpeg is
      a supported tiff compression mode, leptonica requires linking
      both libtiff and libjpeg to read and write tiff.  See tiffio.c

    - BMP has (until recently) had no compression.  It is a simple
      format with colormaps that requires no external libraries.
      It is commonly used because it is a Microsoft standard,
      but has little besides simplicity to recommend it.  See bmpio.c.

    - PNM is a very simple, old format that still has surprisingly
      wide use in the image processing community.  It does not
      support compression or colormaps, but it does support binary,
      grayscale and rgb images.  Like BMP, the implementation
      is simple and requires no external libraries.  See pnmio.c.

    - WEBP is an image encoding method derived from libvpx,
      a video compression library.  It is rapidly growing in acceptance,
      and is supported natively in several browsers.  Leptonica provides
      an interface through webp into the underlying codec.  You need
      to download libwebp.  See webpio.c.

    - JP2 (jpeg2000) is a wavelet encoding method, that has clear
      advantages over jpeg in compression and quality (especially when
      the image has sharp edges, such as scanned documents), but is
      only slowly growing in acceptance.  For it to be widely supported,
      it will require support on a major browser (as with webp).
      Leptonica provides an interface through openjpeg into the underlying
      codec.  You need to download libopenjp2, version 2.X.  See jp2kio.c.

    - GIF is still widely used in the world.  With the expiration
      of the LZW patent, it is practical to add support for GIF files.
      The open source gif library is relatively incomplete and
      unsupported (because of the Sperry-Rand-Burroughs-Univac
      patent history).  Leptonica supports versions 5.1+.  See gifio.c.

   Here's a summary of compression support and limitations:
      - All formats except JPEG, WEBP and JP2K support 1 bpp binary.
      - All formats support 8 bpp grayscale (GIF must have a colormap).
      - All formats except GIF support rgb color.
      - All formats except PNM, JPEG, WEBP and JP2K support 8 bpp colormap.
      - PNG and PNM support 2 and 4 bpp images.
      - PNG supports 2 and 4 bpp colormap, and 16 bpp without colormap.
      - PNG, JPEG, TIFF, WEBP, JP2K and GIF support image compression;
        PNM and BMP do not.
      - WEBP supports rgb color and rgba.
      - JP2 supports 8 bpp grayscale, rgb color and rgba.
   Use prog/ioformats_reg for a regression test on all formats, including
   thorough testing on TIFF.
   For more thorough testing on other formats, use:
      - prog/pngio_reg for PNG.
      - prog/gifio_reg for GIF
      - prog/webpio_reg for WEBP
      - prog/jp2kio_reg for JP2
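
   The basic file-level interface is pixRead()/pixWrite(); the input
   format is detected automatically and the output format is selected
   with an IFF_* flag.  A minimal sketch:

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs;

           pixs = pixRead("marge.jpg");                       /* jpeg input */
           pixWrite("/tmp/marge.png", pixs, IFF_PNG);         /* lossless png */
           pixWrite("/tmp/marge2.jpg", pixs, IFF_JFIF_JPEG);  /* lossy jpeg */
           pixDestroy(&pixs);
           return 0;
       }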

   We provide generators for PS output, from all types of input images.
   The output can be either uncompressed or compressed with level 2
   (ccittg4 or dct) or level 3 (flate) encoding.  You have flexibility
   for scaling and placing of images, and for printing at different
   resolutions.  You can also compose mixed raster (text, image) PS.
   See psio1.c for examples of how to output PS for different applications.
   As examples of usage, see:
     * prog/converttops.c for a general image --> PS conversion
           for printing.  You can specify the PS compression level (1, 2, or 3).
     * prog/imagetops.c for a special image --> PS conversion
           for printing at maximum size on 8.5 x 11 paper.  You can
           specify the PS compression level (1, 2, or 3).
     * prog/convertfilestops.c to generate a multipage level 3 compressed
           PS file that can then be converted to pdf with ps2pdf.
     * prog/convertsegfilestops.c to generate a multipage, mixed raster,
           level 2 compressed PS file.

   We provide generators for PDF output, again from all types of input
   images, and with ccittg4, dct, flate and jpx (jpeg2000) compression.
   You can do the following for PDF:
     * Put any number of images onto a page, with specified input
       resolution, location and compression.
     * Write a mixed raster PDF, given an input image and a segmentation
       mask.  Non-image regions are written in G4 (fax) encoding.
     * Concatenate single-page PDF wrapped images into a single PDF file.
     * Build a PDF file of all images in a directory or array of file names.
   As examples of usage, see:
     * prog/converttopdf.c: fast pdf generation with one image/page.
       For speed, this avoids transcoding whenever possible.
     * prog/convertfilestopdf.c: more flexibility in the output.  You
       can set the resolution, scaling, encoding type and jpeg quality.
     * prog/convertsegfilestopdf.c: generates a multipage, mixed raster pdf,
       with separate controls for compressing text and non-text regions.

   Note: any or all of these I/O library calls can be stubbed out at
         compile time, using the environment variables in environ.h.

   For all formatted reads and writes, we support read from memory
   and write to memory.  The gnu C runtime library (glibc) supports
   two I/O functions, open_memstream() and fmemopen(), which read
   and write directly to memory as if writing to a file stream.
     * On all platforms, leptonica supports direct read/write with memory
       for TIFF, PNG, BMP, GIF and WEBP formats.
     * On linux, leptonica uses the special gnu libraries to enable
       direct read/write with memory for JPEG, PNM and JP2.
     * On platforms without the gnu libraries, such as OSX, Windows
       and Solaris, read/write with memory for JPEG, PNM and JP2 goes
       through temp files.
   To enable/disable memory I/O for image read/write, see environ.h.

   We also provide fast serialization and deserialization between a pix
   in memory and a file (spixio.c).  This works on all types of pix images.

10. Colormap removal and color quantization

   Leptonica provides functions that remove colormaps, for conversion
   to either 8 bpp gray or 24 bpp RGB.  It also provides the inverse
   function to colormap removal; namely, color quantization
   from 24 bpp full color to 8 bpp colormap with some number
   of colormap colors.  Several versions are provided, some that
   use a fast octree vector quantizer and others that use
   a variation of the median cut quantizer.  For high-level interfaces,
   see for example: pixConvertRGBToColormap(), pixOctreeColorQuant(),
   pixOctreeQuantByPopulation(), pixFixedOctcubeQuant256(),
   and pixMedianCutQuant().
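
   A sketch of quantizing an rgb image to a colormap and then removing
   the colormap again:

       #include "allheaders.h"

       int main(void)
       {
       PIX  *pixs, *pix1, *pix2;

           pixs = pixRead("marge.jpg");                 /* 32 bpp rgb */
           pix1 = pixOctreeColorQuant(pixs, 128, 0);    /* <= 128 colormap colors */
           pix2 = pixRemoveColormap(pix1, REMOVE_CMAP_TO_FULL_COLOR);
           pixDestroy(&pixs);
           pixDestroy(&pix1);
           pixDestroy(&pix2);
           return 0;
       }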

11. Programmatic image display

   For debugging, pixDisplay() and pixDisplayWithTitle() in writefile.c
   can be called to display an image using one of several display
   programs (xzgv, xli, xv, l_view).  If necessary to fit on the screen,
   the image is reduced in size, with 1 bpp images being converted
   to grayscale for readability.  Another common debug method is to
   accumulate intermediate images in a pixa, and then either view these
   as a mosaic (using functions in pixafunc2.c), or gather them into
   a compressed PDF or PostScript file for viewing with evince.  Common
   image display programs are: xzgv, xli, xv, display, gthumb, gqview,
   evince, gv and acroread.
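
   A sketch of the pixa-accumulation approach; the tiling call is one of
   the display functions in pixafunc2.c, and its signature is assumed
   here (check allheaders.h before using it):

       #include "allheaders.h"

       int main(void)
       {
       PIX   *pix1, *pix2, *pixd;
       PIXA  *pixa;

           pixa = pixaCreate(0);
           pix1 = pixRead("test8.jpg");
           pix2 = pixBlockconv(pix1, 5, 5);      /* some intermediate result */
           pixaAddPix(pixa, pix1, L_INSERT);
           pixaAddPix(pixa, pix2, L_INSERT);
           /* signature assumed; see pixafunc2.c */
           pixd = pixaDisplayTiledInRows(pixa, 8, 1500, 1.0, 0, 30, 2);
           pixWrite("/tmp/debug.png", pixd, IFF_PNG);
           pixDestroy(&pixd);
           pixaDestroy(&pixa);
           return 0;
       }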

12. Document image analysis

   Many functions have been included specifically to help with
   document image analysis.  These include skew and text orientation
   detection; page segmentation; baseline finding for text;
   unsupervised classification of connected components, characters
   and words; dewarping camera images; adaptive binarization; and
   a simple book-adaptive classifier for various character sets,
   segmentation for newspaper articles, etc.
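
   For example, a sketch of skew detection and correction on a 1 bpp
   scanned page:

       #include <stdio.h>
       #include "allheaders.h"

       int main(void)
       {
       l_float32  angle, conf;
       PIX       *pixs, *pixd;

           pixs = pixRead("feyn.tif");          /* 1 bpp scanned page */
           pixFindSkew(pixs, &angle, &conf);    /* angle in degrees */
           fprintf(stderr, "skew = %7.3f deg, conf = %5.1f\n", angle, conf);
           pixd = pixDeskew(pixs, 0);           /* rotate to remove the skew */
           pixDestroy(&pixs);
           pixDestroy(&pixd);
           return 0;
       }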

13. Data structures

   Several simple data structures are provided for safe and efficient handling
   of arrays of numbers, strings, pointers, and bytes.  The generic
   pointer array is implemented in four ways: as a stack, a queue,
   a heap (used to implement a priority queue), and an array with
   insertion and deletion, from which the stack operations form a subset.
   Byte arrays are implemented both as a wrapper around the actual
   array and as a queue.  The string arrays are particularly useful
   for both parsing and composing text.  Generic lists with
   doubly-linked cons cells are also provided.  Other data structures
   are provided for handling ordered sets and maps, as well as hash sets
   and hash maps.
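
   A sketch using the number array (Numa) and string array (Sarray); the
   other structures follow the same create/add/destroy pattern:

       #include "allheaders.h"

       int main(void)
       {
       l_float32  val;
       NUMA      *na;
       SARRAY    *sa;

           na = numaCreate(0);
           numaAddNumber(na, 3.14);
           numaGetFValue(na, 0, &val);
           sa = sarrayCreate(0);
           sarrayAddString(sa, "hello", L_COPY);
           numaDestroy(&na);
           sarrayDestroy(&sa);
           return 0;
       }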

14. Examples of programs that are easily built using the library:

    - for plotting x-y data, we give a programmatic interface
      to the gnuplot program, with output to X11, png, ps or eps.
      We also allow serialization of the plot data, in a form
      such that the data can be read, the commands generated,
      and (finally) the plot constructed by running gnuplot.

    - a simple jbig2-type classifier, using various distance
      metrics between image components (correlation, rank
      hausdorff); see prog/jbcorrelation.c, prog/jbrankhaus.c.

    - a simple color segmenter, giving a smoothed image
      with a small number of the most significant colors.

    - a program for converting all images in a directory
      to a PostScript file, and a program for printing an image
      in any (supported) format to a PostScript printer.

    - various programs for generating pdf files from compressed
      images, including very fast ones that don't scale and
      avoid transcoding if possible.

    - converters between binary images and SVG format.

    - an adaptive recognition utility for training and identifying
      text characters in a multipage document such as a book.

    - a bitmap font facility that allows painting text onto
      images.  We currently support one font in several sizes.
      The font images and postscript programs for generating
      them are stored in prog/fonts/, and also as compiled strings
      in bmfdata.h.

    - a binary maze game lets you generate mazes and find shortest
      paths between two arbitrary points, if such a path exists.
      You can also compute the "shortest" (i.e., least cost) path
      between points on a grayscale image.

    - a 1D barcode reader.  This is still in an early stage of development,
      with little testing, and it only decodes 6 formats.

    - a utility that will dewarp images of text that were captured
      with a camera at close range.

    - a sudoku solver and generator, including a good test for uniqueness

    - see (13, above) for other document image applications.

15. JBig2 encoder

   Leptonica supports an open source jbig2 encoder (yes, there is one!),
   which can be downloaded from:
       http://www.imperialviolet.org/jbig2.html.
   To build the encoder, use the most recent version.  This bundles
   Leptonica 1.63.  Once you've built the encoder, use it to compress
   a set of input image files:  (e.g.)
       ./jbig2 -v -s <image files ...>  >  <jbig2 file>
   You can also generate a pdf wrapping for the output jbig2.  To do that,
   call jbig2 with the -p arg, which generates a symbol file (output.sym)
   plus a set of location files for each input image (output.0000, ...):
        ./jbig2 -v -s -p <image files ...>
   and then generate the pdf:
       python pdf.py output  >  <pdf file>
   See the usage documentation for the jbig2 compressor at:
       http://www.imperialviolet.org/binary/jbig2enc.html
   You can uncompress the jbig2 files using jbig2dec, which can be
   downloaded and built from:
       http://jbig2dec.sourceforge.net/

16. Versions

   New versions of the Leptonica library are released several times
   a year, and version numbers are provided for each release in the
   following files:
       src/makefile.static
       CMakeLists.txt
       configure.ac
       allheaders_top.txt  (and propagated to allheaders.h)
   All even versions from 1.42 to 1.60 were originally archived at
   http://code.google.com/p/leptonica, as well as all versions after 1.60.
   These have now been transferred by Egor Pugin to github:
       github.com/danbloomberg/leptonica
   where all releases (1.42 - 1.85.0) are available; e.g.,
       https://github.com/DanBloomberg/leptonica/releases/tag/1.85.0
   The more recent releases, from 1.80, are also available at
       leptonica.org/download.html
   Note that if you are downloading from github, the releases are more
   likely to be stable than the master.  Also, if you download from
   the master and use autotools (e.g., Makefile.am), you must first run
   autogen.sh to generate the configure program and the Makefiles.

   The number of downloads of leptonica increased by nearly an order
   of magnitude with 1.69, due to bundling with tesseract and
   incorporation in ubuntu 12.04.  Jeff Breidenbach has built all
   the Debian releases, which require release version numbers.
   The Debian binary release versions to date are:
        1.69 : 3.0.0
        1.70 : 4.0.0
        1.71 : 4.2.0
        1.72 : 4.3.0
        1.73 : 5.0.0
        1.74 : 5.1.0
        1.75 : 5.2.0
        1.76 : 5.3.0
        1.77 : 5.3.0
        1.78 : 5.3.0
        1.79 : 5.4.0
        1.80 : 5.4.0
        1.81 : 5.4.0
        1.82 : 5.4.0
        1.83 : 6.0.0
        1.84 : 6.0.0
        1.85 : 6.0.0   (in progress)

   A brief version chronology is maintained in version-notes.html.
   Starting with gcc 4.3.3, error warnings (-Werror) are given for
   minor infractions like not checking return values of built-in C
   functions.  I have attempted to eliminate these warnings.
   In any event, you will see warnings with the -Wall flag.