Convert CHANGELOG to rst format and include it in documentation. Fixes
MathMagique committed Jun 4, 2017
1 parent 1719453 commit 1df198b
Showing 7 changed files with 101 additions and 83 deletions.
174 changes: 95 additions & 79 deletions CHANGELOG.md → CHANGELOG.rst
@@ -1,16 +1,20 @@
Version history / changelog
===========================

From version 2.0.0 on, turbodbc adopts semantic versioning.

Version 2.0.0 (unreleased)
--------------------------
* Initial support for the arrow data format with the ``Cursor.fetchallarrow()``
method. Still in alpha stage, mileage may vary (Windows not yet supported,
UTF-16 unicode not yet supported). Big thanks to @xhochy!
* ``prefer_unicode`` option now also affects column name rendering
when gathering results from the database. This effectively enables
support for Unicode column names for some databases.
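A minimal sketch of the new Arrow fetch path (the DSN name and query are hypothetical, and the feature is still in alpha):

```python
import turbodbc

# "testdb" is a hypothetical ODBC data source; replace with your own DSN.
connection = turbodbc.connect(dsn="testdb")
cursor = connection.cursor()
cursor.execute("SELECT 42 AS answer")

# New in 2.0.0 (alpha): fetch the whole result set as an Apache Arrow table.
table = cursor.fetchallarrow()
print(table.num_rows)
```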


Earlier versions (not conforming to semantic versioning)
--------------------------------------------------------

The following versions do not conform to semantic versioning. The
meaning of the major.minor.revision versions is:
@@ -19,94 +23,101 @@ meaning of the major.minor.revision versions is:
* Revision: If incremented, indicates non-breaking change (either feature or bug fix)

Version 1.1.2
-------------

* Added ``autocommit`` as a keyword argument to ``make_options()``. As the
name suggests, this allows you to enable automatic ``COMMIT`` operations
after each operation. It also improves compatibility with databases
that do not support transactions.
* Added ``autocommit`` property to ``Connection`` class that allows switching
autocommit mode after the connection was created.
* Fixed bug with ``cursor.rowcount`` not being reset to ``-1`` when calls to
``execute()`` or ``executemany()`` raised exceptions.
* Fixed bug with ``cursor.rowcount`` not showing the correct value when
manipulating queries were used without placeholders, i.e., with
parameters baked into the query.
* Global interpreter lock (GIL) is released during some operations to
facilitate basic multi-threading (thanks @chmp)
* Internal: The return code ``SQL_SUCCESS_WITH_INFO`` is now treated as
a success instead of an error when allocating environment, connection,
and statement handles. This may improve compatibility with some databases.
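A short sketch of the new autocommit controls ("testdb" is a hypothetical DSN; ``turbodbc_options`` is the keyword introduced in 1.1.0):

```python
import turbodbc

# Enable autocommit up front via make_options() ...
options = turbodbc.make_options(autocommit=True)
connection = turbodbc.connect(dsn="testdb", turbodbc_options=options)

# ... or toggle it later through the new Connection property.
connection.autocommit = False
print(connection.autocommit)
```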

Version 1.1.1
-------------

* Windows is now *officially* supported (64 bit, Python 3.5 and 3.6). From now on,
code is automatically compiled and tested on Linux, OSX, and Windows
(thanks @TWAC for support). Windows binary wheels are uploaded to PyPI.
* Added support for fetching results in batches of NumPy objects with
``cursor.fetchnumpybatches()`` (thanks @yaxxie)
* MSSQL is now part of the Windows test suite (thanks @TWAC)
* ``connect()`` now allows specifying a ``connection_string`` instead of
individual arguments, which are then compiled into a connection string (thanks @TWAC).
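The new ``connection_string`` keyword can be sketched like this (driver and server values are hypothetical):

```python
import turbodbc

# Instead of individual keyword arguments that get compiled into a
# connection string, pass a ready-made ODBC connection string directly.
connection = turbodbc.connect(
    connection_string="Driver={FreeTDS};Server=localhost;Port=1433;Database=test"
)
```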

Version 1.1.0
-------------

* Added support for databases that require Unicode data to be transported
in UCS-2/UTF-16 format rather than UTF-8, e.g., MSSQL.
* Added *experimental* support for Windows source distribution builds.
Windows builds are not fully (or automatically) tested yet, and still require
significant effort on the user side to compile (thanks @TWAC for this initial version)
* Added new ``cursor.fetchnumpybatches()`` method which returns a generator to
iterate over result sets in batch sizes as defined by buffer size or rowcount
(thanks @yaxxie)
* Added ``make_options()`` function that takes all performance and compatibility
settings as keyword arguments.
* Deprecated all performance options (``read_buffer_size``, ``use_async_io``, and
``parameter_sets_to_buffer``) for ``connect()``. Please move these keyword arguments
to ``make_options()``. Then, set ``connect()``'s new keyword argument ``turbodbc_options``
to the result of ``make_options()``. This effectively separates performance options
from options passed to the ODBC connection string.
* Removed deprecated option ``rows_to_buffer`` from ``turbodbc.connect()``
(see version 0.4.1 for details).
* The order of arguments for ``turbodbc.connect()`` has changed; this may affect
you if you have not used keyword arguments.
* The behavior of ``cursor.fetchallnumpy()`` has changed a little. The
``mask`` attribute of a generated ``numpy.MaskedArray`` instance is
shortened to ``False`` from the previous ``[False, ..., False]`` if the
mask is ``False`` for all entries. This can cause problems when you
access individual indices of the mask.
* Updated ``pybind11`` requirement to at least ``2.1.0``.
* Internal: Some types have changed to accommodate Linux/OSX/Windows compatibility.
In particular, a few ``long`` types were converted to ``intptr_t`` and ``int64_t``
where appropriate. This also affects the ``field`` type that may be used
by C++ end users (should they exist).
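The ``mask`` shortening described above can be reproduced with plain NumPy, no database required; ``shrink_mask()`` stands in for what ``fetchallnumpy()`` may now return (a sketch, not turbodbc's actual code path):

```python
import numpy as np

# Emulate a column returned by cursor.fetchallnumpy() in which no value
# is missing: since 1.1.0, the mask may be shortened to the scalar False.
column = np.ma.MaskedArray([1, 2, 3], mask=[False, False, False])
column.shrink_mask()

assert column.mask is np.ma.nomask  # column.mask[0] would now raise

# np.ma.getmaskarray() always returns a full-length boolean array,
# so per-index mask access stays safe with both old and new behavior.
mask = np.ma.getmaskarray(column)
assert mask.shape == (3,)
assert not mask.any()
```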


Version 1.0.5
-------------

* Internal: Remove some ``const`` pointers to resolve some compile issues with
Xcode 6.4 (thanks @xhochy)

Version 1.0.4
-------------

* Added possibility to set unixodbc include and library directories in
setup.py. Required for conda builds.

Version 1.0.3
-------------

* Improved compatibility with ODBC drivers (e.g. FreeTDS) that do not
support ODBC's ``SQLDescribeParam()`` function by using a default
parameter type.
* Used a default parameter type when the ODBC driver cannot determine
a parameter's type, for example when using column expressions for
``INSERT`` statements.
* Improved compatibility with some ODBC drivers (e.g. Microsoft's official
MSSQL ODBC driver) for setting timestamps with fractional seconds.

Version 1.0.2
-------------

* Added support for chaining operations to ``Cursor.execute()`` and
``Cursor.executemany()``. This allows one-liners such as
``cursor.execute("SELECT 42").fetchallnumpy()``.
* Right before a database connection is closed, any open transactions
are explicitly rolled back. This improves compatibility with ODBC drivers
that do not perform automatic rollbacks such as Microsoft's official
@@ -117,63 +128,68 @@ Version 1.0.2
* Source distribution now contains license, readme, and changelog.
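The chaining feature can be sketched as follows (hypothetical DSN):

```python
import turbodbc

connection = turbodbc.connect(dsn="testdb")  # hypothetical DSN
cursor = connection.cursor()

# execute() returns the cursor itself since 1.0.2, enabling one-liners:
result = cursor.execute("SELECT 42").fetchallnumpy()
```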

Version 1.0.1
-------------

* Added support for OSX

Version 1.0.0
-------------

* Added support for Python 3. Python 2 is still supported as well.
Tested with Python 2.7, 3.4, 3.5, and 3.6.
* Added ``six`` package as dependency
* Turbodbc uses pybind11 instead of Boost.Python to generate its Python
bindings. pybind11 is available as a Python package and automatically
installed when you install turbodbc.
Other boost libraries are still required for other aspects of the code.
* A more modern compiler is required due to the pybind11 dependency.
GCC 4.8 will suffice.
* Internal: Move remaining stuff depending on python to turbodbc_python
* Internal: Now requires cmake 2.8.12+ (get it with ``pip install cmake``)

Version 0.5.1
-------------

* Fixed build issue with older numpy versions, e.g., 1.8 (thanks @xhochy)

Version 0.5.0
-------------

* Improved performance of parameter-based operations.
* Internal: Major modifications to the way parameters are handled.

Version 0.4.1
-------------

* The size of the input buffers for retrieving result sets can now be set
to a certain amount of memory instead of using a fixed number of rows.
Use the optional ``read_buffer_size`` parameter of ``turbodbc.connect()`` and
set it to instances of the new top-level classes ``Megabytes`` and ``Rows``
(thanks @LukasDistel).
* The read buffer size's default value has changed from 1,000 rows to
20 MB.
* The parameter ``rows_to_buffer`` of ``turbodbc.connect()`` is *deprecated*.
You can set the ``read_buffer_size`` to ``turbodbc.Rows(1000)`` for the same
effect, though it is recommended to specify the buffer size in MB.
* Internal: Libraries no longer link ``libpython.so`` for local development
(linking is already done by the Python interpreter). This was always
the case for the libraries in the packages uploaded to PyPI, so no
change was necessary here.
* Internal: Some modifications to the structure of the underlying
C++ code.
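The buffer options might be used like this (hypothetical DSN; ``Megabytes`` and ``Rows`` are the new top-level classes named above):

```python
import turbodbc

# Size the read buffer in memory terms (the new default is 20 MB) ...
connection = turbodbc.connect(dsn="testdb",
                              read_buffer_size=turbodbc.Megabytes(20))

# ... or keep sizing it in rows, replacing the deprecated rows_to_buffer:
connection = turbodbc.connect(dsn="testdb",
                              read_buffer_size=turbodbc.Rows(1000))
```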

Version 0.4.0
-------------

* NumPy support is introduced to turbodbc for retrieving result sets.
Use ``cursor.fetchallnumpy`` to retrieve a result set as an ``OrderedDict``
of ``column_name: column_data`` pairs, where ``column_data`` is a NumPy ``MaskedArray``
of appropriate type.
* Internal: Single ``turbodbc_intern`` library was split up into three libraries
to keep NumPy support optional. A few files were moved because of this.
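Usage might look like the following sketch (DSN and table names are hypothetical):

```python
import turbodbc

connection = turbodbc.connect(dsn="testdb")  # hypothetical DSN
cursor = connection.cursor()
cursor.execute("SELECT a, b FROM my_table")  # hypothetical table

# OrderedDict mapping column_name -> numpy.ma.MaskedArray
results = cursor.fetchallnumpy()
for name, column in results.items():
    print(name, column.dtype)
```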

Version 0.3.0
-------------

* turbodbc now supports asynchronous I/O operations for retrieving result sets.
This means that while the main thread is busy converting an already retrieved
@@ -183,61 +199,61 @@ Version 0.3.0
as they are converted to Python objects).

Asynchronous I/O support is experimental. Enable it with
``turbodbc.connect('My data source name', use_async_io=True)``

Version 0.2.5
-------------

* C++ backend: ``turbodbc::column`` no longer automatically binds on
construction. Call ``bind()`` instead.

Version 0.2.4
-------------

* Result set rows are returned as native Python lists instead of a not easily
printable custom type.
* Improve performance of Python object conversion while reading result sets.
In tests with an Exasol database, performance got about 15% better.
* C++ backend: ``turbodbc::cursor`` no longer allows direct access to the C++
``field`` type. Instead, please use the ``cursor``'s ``get_query()`` method,
and construct a ``turbodbc::result_sets::field_result_set`` using the
``get_results()`` method.

Version 0.2.3
-------------

* Fix issue that only lists were allowed for specifying parameters for queries
* Improve parameter memory consumption when the database reports very large
string parameter sizes
* C++ backend: Provides more low-level ways to access the result set

Version 0.2.2
-------------

* Fixed issue where the ``dsn`` parameter was always present in the connection string
even if it was not set by the user's call to ``connect()``
* Internal: First version to run on Travis.
* Internal: Use pytest instead of unittest for testing
* Internal: Allow for integration tests to run in custom environment
* Internal: Simplify integration test configuration


Version 0.2.1
-------------

* Internal: Change C++ test framework to Google Test


Version 0.2.0
-------------

* New parameter types supported: ``bool``, ``datetime.date``, ``datetime.datetime``
* ``cursor.rowcount`` returns number of affected rows for manipulating queries
* ``Connection`` supports ``rollback()``
* Improved handling of string parameters
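The 0.2.0 additions combine roughly like this (DSN and table are hypothetical):

```python
import datetime
import turbodbc

connection = turbodbc.connect(dsn="testdb")  # hypothetical DSN
cursor = connection.cursor()

# bool and datetime.date are among the newly supported parameter types.
cursor.execute("INSERT INTO my_table VALUES (?, ?)",
               [True, datetime.date(2017, 6, 4)])
print(cursor.rowcount)  # affected rows for manipulating queries

connection.rollback()   # new in 0.2.0: discard the uncommitted insert
```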


Version 0.1.0
-------------

Initial release
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -16,6 +16,6 @@ if(BUILD_TESTING)
endif()

install(
FILES setup.py setup.cfg MANIFEST.in Makefile LICENSE README.md CHANGELOG.md
FILES setup.py setup.cfg MANIFEST.in Makefile LICENSE README.md CHANGELOG.rst
DESTINATION .
)
2 changes: 1 addition & 1 deletion MANIFEST.in
@@ -2,4 +2,4 @@ global-include *.h
global-include *.cpp
include LICENSE
include README.md
include CHANGELOG.md
include CHANGELOG.rst
2 changes: 1 addition & 1 deletion contrib/copy-to-sdist.py
@@ -40,7 +40,7 @@ def main():
for afile in ["setup.cfg",
"setup.py",
"README.md",
"CHANGELOG.md",
"CHANGELOG.rst",
"MANIFEST.in",
"LICENSE",
]:
1 change: 1 addition & 0 deletions docs/index.rst
@@ -33,6 +33,7 @@ and `MSSQL <http://microsoft.com/sql>`_, but probably also works with your database
pages/advanced_usage
pages/odbc_configuration
pages/databases
pages/changelog
pages/troubleshooting
pages/faq
pages/contributing
1 change: 1 addition & 0 deletions docs/pages/changelog.rst
2 changes: 1 addition & 1 deletion docs/pages/faq.rst
@@ -58,7 +58,7 @@ There are a few options:
* Watch the turbodbc project on `GitHub`_. This way, you will get mails for new issues,
updated issues, and the like.
* Periodically read turbodbc's
`change log <https://github.com/blue-yonder/turbodbc/blob/master/CHANGELOG.md>`_
`change log <https://github.com/blue-yonder/turbodbc/blob/master/CHANGELOG.rst>`_
* Follow `@turbodbc <https://twitter.com/turbodbc>`_ on Twitter. There will be tweets
for new releases.

