path: root/doc/manual.xhtml
Diffstat (limited to 'doc/manual.xhtml')
-rw-r--r--  doc/manual.xhtml  174
1 files changed, 116 insertions, 58 deletions
diff --git a/doc/manual.xhtml b/doc/manual.xhtml
index 5bd49bd..a308758 100644
--- a/doc/manual.xhtml
+++ b/doc/manual.xhtml
@@ -538,6 +538,7 @@ for consistency.
<tr><th>14.1.13</th><td><a href="#14.1.13"><code>sectionable</code></a></td></tr>
<tr><th>14.1.14</th><td><a href="#14.1.14"><code>deleted</code></a></td></tr>
<tr><th>14.1.15</th><td><a href="#14.1.15"><code>bulk</code></a></td></tr>
+ <tr><th>14.1.16</th><td><a href="#14.1.16"><code>options</code></a></td></tr>
</table>
</td>
</tr>
@@ -774,6 +775,7 @@ for consistency.
<tr><th>19.5.4</th><td><a href="#19.5.4">Date-Time Format</a></td></tr>
<tr><th>19.5.5</th><td><a href="#19.5.5">Timezones</a></td></tr>
<tr><th>19.5.6</th><td><a href="#19.5.6"><code>NUMERIC</code> Type Support</a></td></tr>
+ <tr><th>19.5.7</th><td><a href="#19.5.7">Bulk Operations Support</a></td></tr>
</table>
</td>
</tr>
@@ -4577,9 +4579,9 @@ namespace odb
<p>The <code>unknown_schema</code> exception is thrown by the
<code>odb::schema_catalog</code> class if a schema with the specified
name is not found. Refer to <a href="#3.4">Section 3.4, "Database"</a>
- for details. The <code>unknown_schema_version</code> exception is
- thrown by the <code>schema_catalog</code> functions that deal with
- database schema evolution if the passed version is unknow. Refer
+ for details. The <code>unknown_schema_version</code> exception is thrown
+ by the <code>schema_catalog</code> functions that deal with database
+  schema evolution if the passed or current version is unknown. Refer
to <a href="#13">Chapter 13, "Database Schema Evolution"</a> for
details.</p>
@@ -5591,11 +5593,11 @@ for (age = 90; age > 40; age -= 10)
template &lt;typename T>
prepared_query&lt;T>
- lookup_query (const char* name) const;
+ lookup_query (const char* name);
template &lt;typename T, typename P>
prepared_query&lt;T>
- lookup_query (const char* name, P*&amp; params) const;
+ lookup_query (const char* name, P*&amp; params);
</pre>
<p>The <code>cache_query()</code> function caches the passed prepared
@@ -5775,6 +5777,11 @@ db.query_factory (
});
</pre>
+  <p>Note that the <code>database::query_factory()</code> function is not
+  thread-safe and should be called before starting any threads that may
+  require this functionality. Normally, all the prepared query factories
+  are registered as part of the database instance creation.</p>
+
<!-- CHAPTER -->
<hr class="page-break"/>
@@ -10890,7 +10897,7 @@ unsigned short v_min = ...
unsigned short l_min = ...
result r (db.query&lt;employee_leave> (
- "vacation_days > " + query::_val(v_min) + "AND"
+ "vacation_days > " + query::_val(v_min) + "AND" +
"sick_leave_days > " + query::_val(l_min)));
t.commit ();
@@ -12840,7 +12847,7 @@ namespace odb
<ol>
<li>The database administrator determines the current database version.
If migration is required, then for each migration step (that
- is, from one version to the next), he performs the following:</li>
+ is, from one version to the next), they perform the following:</li>
<li>Execute the pre-migration file.</li>
@@ -13002,12 +13009,12 @@ t.commit ();
process is directed by an external entity, such as a database
administrator or a script.</p>
- <p>Most <code>schema_catalog</code> functions presented above also
- accept the optional schema name argument. If the passed schema
- name is not found, then the <code>odb::unknown_schema</code> exception
- is thrown. Similarly, functions that accept the schema version
- argument will throw the <code>odb::unknown_schema_version</code> exception
- if the passed version is invalid. Refer to <a href="#3.14">Section
+ <p>Most <code>schema_catalog</code> functions presented above also accept
+ the optional schema name argument. If the passed schema name is not
+ found, then the <code>odb::unknown_schema</code> exception is
+ thrown. Similarly, functions that accept the schema version argument will
+ throw the <code>odb::unknown_schema_version</code> exception if the
+ passed or current version is invalid. Refer to <a href="#3.14">Section
3.14, "ODB Exceptions"</a> for more information on these exceptions.</p>
<p>To illustrate how all these parts fit together, consider the
@@ -14359,6 +14366,12 @@ class person
<td><a href="#14.1.15">14.1.15</a></td>
</tr>
+ <tr>
+ <td><code>options</code></td>
+ <td>database options for a persistent class</td>
+ <td><a href="#14.1.16">14.1.16</a></td>
+ </tr>
+
</table>
<h3><a name="14.1.1">14.1.1 <code>table</code></a></h3>
@@ -15001,6 +15014,39 @@ class employer
is the batch size. For more information on this functionality, refer
to <a href="#15.3">Section 15.3, "Bulk Database Operations"</a>.</p>
+ <h3><a name="14.1.16">14.1.16 <code>options</code></a></h3>
+
+ <p>The <code>options</code> specifier specifies additional table
+ definition options that should be used for the persistent class. For
+ example:</p>
+
+ <pre class="cxx">
+#pragma db object options("PARTITION BY RANGE (age)")
+class person
+{
+ ...
+
+ unsigned short age_;
+};
+ </pre>
+
+ <p>Table definition options for a container table can be specified with
+ the <code>options</code> data member specifier
+ (<a href="#14.4.8">Section 14.4.8, "<code>options</code>"</a>). For
+ example:</p>
+
+ <pre class="cxx">
+#pragma db object
+class person
+{
+ ...
+
+ #pragma db options("PARTITION BY RANGE (index)")
+ std::vector&lt;std::string> aliases_;
+};
+ </pre>
+
+
<h2><a name="14.2">14.2 View Type Pragmas</a></h2>
<p>A pragma with the <code>view</code> qualifier declares a C++ class
@@ -16577,6 +16623,11 @@ class person
};
</pre>
+  <p>Note that if specified for a container member, then instead of the
+ column definition options it specifies the table definition options for
+ the container table (<a href="#14.1.16">Section 14.1.16,
+ "<code>options</code>"</a>).</p>
+
<p>Options can also be specified on the per-type basis
(<a href="#14.3.5">Section 14.3.5, "<code>options</code>"</a>).
By default, options are accumulating. That is, the ODB compiler
@@ -18705,23 +18756,24 @@ class derived: public string_base
<h2><a name="15.3">15.3 Bulk Database Operations</a></h2>
- <p>Some database systems supported by ODB provide a mechanism, often
- called bulk or batch statement execution, that allows us to execute
- the same SQL statement on multiple sets of data at once and with a
- single database API call. This often results in significantly
- better performance if we need to execute the same statement for a
- large number of data sets (thousands to millions).</p>
-
- <p>ODB translates this mechanism to bulk operations which allow
- us to persist, update, or erase a range of objects in the database.
- Currently, from all the database systems supported by ODB, only
- Oracle and Microsoft SQL Server are capable of bulk operations.
- There is also currently no emulation of the bulk API for other
- databases nor dynamic multi-database support. As a result, if
- you are using dynamic multi-database support, you will need to
- "drop down" to static support in order to access the bulk API.
- Refer to <a href="#16">Chapter 16, "Multi-Database Support"</a>
- for details.</p>
+ <p>Some database systems supported by ODB provide a mechanism, often called
+ bulk or batch statement execution, that allows us to execute the same SQL
+ statement on multiple sets of data at once and with a single database API
+ call (or equivalent). This often results in significantly better
+ performance if we need to execute the same statement for a large number
+ of data sets (thousands to millions).</p>
+
+ <p>ODB translates this mechanism to bulk operations which allow us to
+ persist, update, or erase a range of objects in the database. Currently,
+    of all the database systems supported by ODB, only Oracle, Microsoft
+ SQL Server, and PostgreSQL are capable of bulk operations (but
+ see <a href="#19.5.7">Section 19.5.7, "Bulk Operations Support"</a> for
+ PostgreSQL limitations). There is also currently no emulation of the bulk
+ API for other databases nor dynamic multi-database support. As a result,
+ if you are using dynamic multi-database support, you will need to "drop
+ down" to static support in order to access the bulk API. Refer
+ to <a href="#16">Chapter 16, "Multi-Database Support"</a> for
+ details.</p>
<p>As we will discuss later in this section, bulk operations have
complex failure semantics that is dictated by the underlying
@@ -18755,15 +18807,15 @@ class person
</pre>
<p>The single argument to the <code>bulk</code> pragma is the batch
- size. The batch size specifies the maximum number of data sets
- that should be handled with a single underlying statement execution.
- If the range that we want to perform the bulk operation on contains
- more objects than the batch size, then ODB will split this operation
- into multiple underlying statement executions (batches). To illustrate
- this point with an example, suppose we want to persist 53,000 objects
- and the batch size is 5,000. ODB will then execute the statement
- 11 times, the first 10 times with 5,000 data sets each, and the
- last time with the remaining 3,000 data sets.</p>
+ size. The batch size specifies the maximum number of data sets that
+ should be handled with a single underlying statement execution (or
+ equivalent). If the range that we want to perform the bulk operation on
+ contains more objects than the batch size, then ODB will split this
+ operation into multiple underlying statement executions (batches). To
+ illustrate this point with an example, suppose we want to persist 53,000
+ objects and the batch size is 5,000. ODB will then execute the statement
+ 11 times, the first 10 times with 5,000 data sets each, and the last time
+ with the remaining 3,000 data sets.</p>
<p>The commonly used batch sizes are in the 2,000-5,000 range, though
smaller or larger batches could provide better performance,
@@ -18780,7 +18832,7 @@ class person
by using the database prefix, for example:</p>
<pre class="cxx">
-#pragma db object mssql:bulk(3000) oracle:bulk(4000)
+#pragma db object mssql:bulk(3000) oracle:bulk(4000) pgsql:bulk(2000)
class person
{
...
@@ -18911,11 +18963,11 @@ db.erase&lt;person> (ids.begin (), ids.end ());
<p>Conceptually, a bulk operation is equivalent to performing the
corresponding non-bulk version in a loop, except when it comes to the
- failure semantics. Both databases that currently are capable of
- bulk operations (Oracle and SQL Server) do not stop when a data
+  failure semantics. Some databases that are currently capable of bulk
+ operations (specifically, Oracle and SQL Server) do not stop when a data
set in a batch fails (for example, because of a unique constraint
- violation). Instead, they continue executing subsequent data
- sets until every element in the batch has been attempted. The
+ violation). Instead, they continue executing subsequent data sets until
+ every element in the batch has been attempted. The
<code>continue_failed</code> argument in the bulk functions listed
above specifies whether ODB should extend this behavior and continue
with subsequent batches if the one it has tried to execute has failed
@@ -19042,20 +19094,19 @@ multiple exceptions, 4 elements attempted, 2 failed:
[3] 1: ORA-00001: unique constraint (ODB_TEST.person_last_i) violated
</pre>
- <p>Both databases that currently are capable of bulk operations return
- a total count of affected rows rather than individual counts for
- each data set. This limitation prevents ODB from being able to
- always determine which elements in the batch haven't affected
- any rows and, for the update and erase operations, translate
- this to the <code>object_not_persistent</code> exceptions. As
- a result, if some elements in the batch haven't affected any
- rows and ODB is unable to determine exactly which ones, it will mark
- all the elements in this batch as "maybe not persistent". That
- is, it will insert the <code>object_not_persistent</code> exception
- and set the <code>maybe</code> flag for every position in the
- batch. The diagnostics string returned by <code>what()</code>
- will also reflect this situation, for example (assuming batch
- size of 3):</p>
+  <p>Some databases that are currently capable of bulk operations
+ (specifically, Oracle and SQL Server) return a total count of affected
+ rows rather than individual counts for each data set. This limitation
+ prevents ODB from being able to always determine which elements in the
+ batch haven't affected any rows and, for the update and erase operations,
+ translate this to the <code>object_not_persistent</code> exceptions. As a
+ result, if some elements in the batch haven't affected any rows and ODB
+ is unable to determine exactly which ones, it will mark all the elements
+ in this batch as "maybe not persistent". That is, it will insert
+ the <code>object_not_persistent</code> exception and set
+ the <code>maybe</code> flag for every position in the batch. The
+ diagnostics string returned by <code>what()</code> will also reflect this
+ situation, for example (assuming batch size of 3):</p>
<pre class="terminal">
multiple exceptions, 4 elements attempted, 4 failed:
@@ -22848,6 +22899,13 @@ SHOW integer_datetimes
ones, as discussed in <a href="#14.8">Section 14.8, "Database
Type Mapping Pragmas"</a>.</p>
+ <h3><a name="19.5.7">19.5.7 Bulk Operations Support</a></h3>
+
+ <p>Support for bulk operations (<a href="#15.3">Section 15.3, "Bulk
+ Database Operations"</a>) requires PostgreSQL client library
+ (<code>libpq</code>) version 14 or later and PostgreSQL server
+ version 7.4 or later.</p>
+
<h2><a name="19.6">19.6 PostgreSQL Index Definitions</a></h2>