Minor changes #177

Merged · 6 commits · Apr 29, 2025

Changes from all commits
@@ -7,7 +7,7 @@ const Uuid = cassandra.types.Uuid;
const client = new cassandra.Client(getClientArgs());

/**
- * Inserts multiple rows in a table from an Array using the built in method <code>executeConcurrent()</code>,
+ * Inserts multiple rows in a table from an Array using the built in method `executeConcurrent()`,
* limiting the amount of parallel requests.
*/
async function example() {
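For reference, the `executeConcurrent()` helper this docstring refers to is used roughly as sketched below. This is a minimal, illustrative sketch only: the contact point, data center, keyspace, and table name are placeholders, and it assumes the `cassandra.concurrent` module shipped with recent 4.x versions of the driver.

```js
"use strict";
const cassandra = require("cassandra-driver");

// Placeholder connection settings; adjust for your cluster.
const client = new cassandra.Client({
  contactPoints: ["127.0.0.1"],
  localDataCenter: "datacenter1",
  keyspace: "examples",
});

async function insertConcurrently() {
  // Hypothetical table: examples.tbl_sample_kv (id uuid, txt text, val int).
  const query = "INSERT INTO tbl_sample_kv (id, txt, val) VALUES (?, ?, ?)";
  // One parameter array per row to insert.
  const values = Array.from({ length: 100 }, (_, i) => [
    cassandra.types.Uuid.random(),
    `row ${i}`,
    i,
  ]);
  // executeConcurrent() runs the inserts in parallel while capping the
  // number of in-flight requests via concurrencyLevel.
  await cassandra.concurrent.executeConcurrent(client, query, values, {
    concurrencyLevel: 32,
  });
}

insertConcurrently()
  .catch(console.error)
  .finally(() => client.shutdown());
```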
examples/DataStax/concurrent-executions/execute-in-loop.js — 2 changes: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ const client = new cassandra.Client(getClientArgs());
/**
* Inserts multiple rows in a table limiting the amount of parallel requests.
*
- * Note that here is a built-in method in the driver <code>executeConcurrent()</code> that allows you to execute
+ * Note that here is a built-in method in the driver `executeConcurrent()` that allows you to execute
* multiple simultaneous requests using an Array or a Stream. Check out execute-concurrent-array.js for more
* information.
*
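The loop-based variant described here throttles parallelism by hand instead of using `executeConcurrent()`. A minimal sketch of one way to do that, using fixed-size waves of `Promise.all()` and a placeholder table name:

```js
"use strict";
const cassandra = require("cassandra-driver");

const client = new cassandra.Client({
  contactPoints: ["127.0.0.1"],
  localDataCenter: "datacenter1",
  keyspace: "examples",
});

// Insert rows in fixed-size waves so that at most `concurrency`
// requests are in flight at any time.
async function insertInWaves(rows, concurrency = 32) {
  // Hypothetical table: examples.tbl_sample_kv (id uuid, txt text, val int).
  const query = "INSERT INTO tbl_sample_kv (id, txt, val) VALUES (?, ?, ?)";
  for (let i = 0; i < rows.length; i += concurrency) {
    const wave = rows.slice(i, i + concurrency);
    await Promise.all(
      wave.map((params) => client.execute(query, params, { prepare: true })),
    );
  }
}
```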
examples/paging/each-row-auto-paged.js — 4 changes: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ const client = new cassandra.Client(getClientArgs());
* See https://github.com/caolan/async
* Alternately you can use the Promise-based API.
*
- * Inserts 100 rows and retrieves them with ``eachRow()`` with automatic paging
+ * Inserts 100 rows and retrieves them with `eachRow()` with automatic paging
*/

async.series(
@@ -34,7 +34,7 @@
},
async function insert(next) {
// This can also be done concurrently to speed up this process.
- // Check ``concurrent-executions`` to how it can be done.
+ // Check `concurrent-executions` to how it can be done.
const query =
"INSERT INTO examples.autoPaged (id, txt, val) VALUES (?, ?, ?)";
for (let i = 0; i < 100; i++) {
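The retrieval side of this example, which the docstring summarizes, looks roughly like the sketch below. It reuses the client created in the example file and assumes the driver's `eachRow()` with the `autoPage` option; the table and columns follow the INSERT shown above.

```js
// Read the rows back with eachRow() and automatic paging: with autoPage
// set, the driver fetches subsequent pages transparently and invokes the
// row callback once per row across all pages.
client.eachRow(
  "SELECT id, txt, val FROM examples.autoPaged",
  [],
  { prepare: true, fetchSize: 100, autoPage: true },
  (n, row) => console.log("row %d: %s", n, row.txt),
  (err) => {
    if (err) console.error(err);
    else console.log("all rows retrieved");
  },
);
```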
examples/paging/each-row.js — 4 changes: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ const client = new cassandra.Client(getClientArgs());
* See https://github.com/caolan/async
* Alternately you can use the Promise-based API.
*
- * Inserts 100 rows and retrieves them with ``eachRow()`` with manual paging
+ * Inserts 100 rows and retrieves them with `eachRow()` with manual paging
*/

async.series(
@@ -34,7 +34,7 @@
},
async function insert(next) {
// This can also be done concurrently to speed up this process.
- // Check ``concurrent-executions`` to how it can be done.
+ // Check `concurrent-executions` to how it can be done.
const query =
"INSERT INTO examples.eachRow (id, txt, val) VALUES (?, ?, ?)";
for (let i = 0; i < 100; i++) {
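The manual-paging counterpart looks roughly like the sketch below, again reusing the example file's client. It assumes the driver's documented pattern where, without `autoPage`, the end callback runs after each page and `result.nextPage()` (when present) requests the following page with the same callbacks.

```js
// Manual paging with eachRow(): fetch one page at a time and ask for
// the next page explicitly from the end callback.
const options = { prepare: true, fetchSize: 25 };
client.eachRow(
  "SELECT id, txt, val FROM examples.eachRow",
  [],
  options,
  (n, row) => console.log("row %d: %s", n, row.txt),
  function pageDone(err, result) {
    if (err) return console.error(err);
    if (result.nextPage) {
      result.nextPage(); // more rows remain; retrieve the following page
    }
  },
);
```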
lib/client-options.js — 52 changes: 26 additions & 26 deletions
@@ -42,7 +42,7 @@ const errors = require("./errors.js");
* It configures the authentication provider to be used against Apache Cassandra's PasswordAuthenticator or DSE's
* DseAuthenticator, when default auth scheme is plain-text.
*
- * Note that you should configure either ``credentials`` or ``authProvider`` to connect to an
+ * Note that you should configure either `credentials` or `authProvider` to connect to an
* auth-enabled cluster, but not both.
*
* @property {String} [credentials.username] The username to use for plain-text authentication.
@@ -67,7 +67,7 @@ const errors = require("./errors.js");
* versions that support it.
* [TODO: Add support for this field]
* @property {Boolean} [monitorReporting.enabled=true] Determines whether the reporting mechanism is enabled.
- * Defaults to ``true``.
+ * Defaults to `true`.
* [TODO: Add support for this field]
* @property {Object} [cloud] The options to connect to a cloud instance.
* [TODO: Add support for this field Remove?]
@@ -79,19 +79,19 @@ const errors = require("./errors.js");
* @property {Boolean} [isMetadataSyncEnabled] Determines whether client-side schema metadata retrieval and update is
* enabled.
*
- * Setting this value to ``false`` will cause keyspace information not to be automatically loaded, affecting
+ * Setting this value to `false` will cause keyspace information not to be automatically loaded, affecting
* replica calculation per token in the different keyspaces. When disabling metadata synchronization, use
* [Metadata.refreshKeyspaces()]{@link module:metadata~Metadata#refreshKeyspaces} to keep keyspace information up to
* date or token-awareness will not work correctly.
*
- * Default: ``true``.
+ * Default: `true`.
* [TODO: Add support for this field]
* @property {Boolean} [prepareOnAllHosts] Determines if the driver should prepare queries on all hosts in the cluster.
- * Default: ``true``.
+ * Default: `true`.
* [TODO: Add support for this field]
* @property {Boolean} [rePrepareOnUp] Determines if the driver should re-prepare all cached prepared queries on a
* host when it marks it back up.
- * Default: ``true``.
+ * Default: `true`.
* [TODO: Add support for this field]
* @property {Number} [maxPrepared] Determines the maximum amount of different prepared queries before evicting items
* from the internal cache. Reaching a high threshold hints that the queries are not being reused, like when
@@ -108,20 +108,20 @@ const errors = require("./errors.js");
* [TODO: Add support for this field]
* @property {AddressTranslator} [policies.addressResolution] The address resolution policy.
* [TODO: Add support for this field]
- * @property {SpeculativeExecutionPolicy} [policies.speculativeExecution] The ``SpeculativeExecutionPolicy``
+ * @property {SpeculativeExecutionPolicy} [policies.speculativeExecution] The `SpeculativeExecutionPolicy`
* instance to be used to determine if the client should send speculative queries when the selected host takes more
* time than expected.
*
- * Default: ``[NoSpeculativeExecutionPolicy]{@link
- * module:policies/speculativeExecution~NoSpeculativeExecutionPolicy}``
+ * Default: `[NoSpeculativeExecutionPolicy]{@link
+ * module:policies/speculativeExecution~NoSpeculativeExecutionPolicy}`
*
* [TODO: Add support for this field]
* @property {TimestampGenerator} [policies.timestampGeneration] The client-side
* [query timestamp generator]{@link module:policies/timestampGeneration~TimestampGenerator}.
*
- * Default: ``[MonotonicTimestampGenerator]{@link module:policies/timestampGeneration~MonotonicTimestampGenerator}``
+ * Default: `[MonotonicTimestampGenerator]{@link module:policies/timestampGeneration~MonotonicTimestampGenerator}`
*
- * Use ``null`` to disable client-side timestamp generation.
+ * Use `null` to disable client-side timestamp generation.
*
* [TODO: Add support for this field]
* @property {QueryOptions} [queryOptions] Default options for all queries.
@@ -157,11 +157,11 @@ const errors = require("./errors.js");
* [TODO: Add support for this field]
* @property {Boolean} [protocolOptions.noCompact] When set to true, enables the NO_COMPACT startup option.
*
- * When this option is supplied ``SELECT``, ``UPDATE``, ``DELETE``, and ``BATCH``
- * statements on ``COMPACT STORAGE`` tables function in "compatibility" mode which allows seeing these tables
+ * When this option is supplied `SELECT`, `UPDATE`, `DELETE`, and `BATCH`
+ * statements on `COMPACT STORAGE` tables function in "compatibility" mode which allows seeing these tables
* as if they were "regular" CQL tables.
*
- * This option only effects interactions with interactions with tables using ``COMPACT STORAGE`` and is only
+ * This option only effects interactions with interactions with tables using `COMPACT STORAGE` and is only
* supported by C* 3.0.16+, 3.11.2+, 4.0+ and DSE 6.0+.
*
* [TODO: Add support for this field]
@@ -181,7 +181,7 @@ const errors = require("./errors.js");
* Please note that this is not the maximum time a call to {@link Client#execute} may have to wait;
* this is the maximum time that call will wait for one particular Cassandra host, but other hosts will be tried if
* one of them timeout. In other words, a {@link Client#execute} call may theoretically wait up to
- * ``readTimeout * number_of_cassandra_hosts`` (though the total number of hosts tried for a given query also
+ * `readTimeout * number_of_cassandra_hosts` (though the total number of hosts tried for a given query also
* depends on the LoadBalancingPolicy in use).
*
* When setting this value, keep in mind the following:
@@ -190,7 +190,7 @@ const errors = require("./errors.js");
* the Cassandra timeout settings.
* - the read timeout is only approximate and only control the timeout to one Cassandra host, not the full query.
*
- * Setting a value of 0 disables read timeouts. Default: ``12000``.
+ * Setting a value of 0 disables read timeouts. Default: `12000`.
* [TODO: Add support for this field]
* @property {Boolean} [socketOptions.tcpNoDelay] When set to true, it disables the Nagle algorithm. Default: true.
* [TODO: Add support for this field]
@@ -203,10 +203,10 @@ const errors = require("./errors.js");
* with this instance.
* [TODO: Add support for this field]
* @property {Object} [sslOptions] Client-to-node ssl options. When set the driver will use the secure layer.
- * You can specify cert, ca, ... options named after the Node.js ``tls.connect()`` options.
+ * You can specify cert, ca, ... options named after the Node.js `tls.connect()` options.
*
- * It uses the same default values as Node.js ``tls.connect()`` except for ``rejectUnauthorized``
- * which is set to ``false`` by default (for historical reasons). This setting is likely to change
+ * It uses the same default values as Node.js `tls.connect()` except for `rejectUnauthorized`
+ * which is set to `false` by default (for historical reasons). This setting is likely to change
* in upcoming versions to enable validation by default.
*
* [TODO: Add support for this field]
@@ -233,12 +233,12 @@ const errors = require("./errors.js");
* [TODO: Add support for this field]
* @property {Boolean} [encoding.useUndefinedAsUnset] Valid for Cassandra 2.2 and above. Determines that, if a parameter
* is set to
- * ``undefined`` it should be encoded as ``unset``.
+ * `undefined` it should be encoded as `unset`.
*
- * By default, ECMAScript ``undefined`` is encoded as ``null`` in the driver. Cassandra 2.2
+ * By default, ECMAScript `undefined` is encoded as `null` in the driver. Cassandra 2.2
* introduced the concept of unset.
- * At driver level, you can set a parameter to unset using the field ``types.unset``. Setting this flag to
- * true allows you to use ECMAScript undefined as Cassandra ``unset``.
+ * At driver level, you can set a parameter to unset using the field `types.unset`. Setting this flag to
+ * true allows you to use ECMAScript undefined as Cassandra `unset`.
*
* Default: true.
*
@@ -249,15 +249,15 @@ const errors = require("./errors.js");
* @property {Boolean} [encoding.useBigIntAsVarint] Use [BigInt ECMAScript type](https://tc39.github.io/proposal-bigint/)
* to represent CQL varint data type.
*
- * Note, that using Integer as Varint (``useBigIntAsVarint == false``) is deprecated.
+ * Note, that using Integer as Varint (`useBigIntAsVarint == false`) is deprecated.
* [TODO: Add support for this field]
* @property {Array.<ExecutionProfile>} [profiles] The array of [execution profiles]{@link ExecutionProfile}.
* [TODO: Add support for this field]
- * @property {Function} [promiseFactory] Function to be used to create a ``Promise`` from a
+ * @property {Function} [promiseFactory] Function to be used to create a `Promise` from a
* callback-style function.
*
* Promise libraries often provide different methods to create a promise. For example, you can use Bluebird's
- * ``Promise.fromCallback()`` method.
+ * `Promise.fromCallback()` method.
*
* By default, the driver will use the
* [Promise constructor]{@link https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise}.
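Taken together, the options documented in this file are passed when constructing the client. A minimal sketch, with placeholder hosts, credentials, and certificate settings, might look like the following; field names follow the JSDoc above, but note that several of them are still marked "[TODO: Add support for this field]" in this codebase, so this reflects the documented interface rather than guaranteed behavior.

```js
"use strict";
const cassandra = require("cassandra-driver");

// Illustrative values only; hosts, credentials, and TLS settings are placeholders.
const client = new cassandra.Client({
  contactPoints: ["10.0.0.1", "10.0.0.2"],
  localDataCenter: "datacenter1",
  // Plain-text auth: configure either `credentials` or `authProvider`, not both.
  credentials: { username: "app_user", password: "app_password" },
  // Client-to-node TLS; option names follow Node.js tls.connect().
  sslOptions: { rejectUnauthorized: true },
  socketOptions: {
    readTimeout: 12000, // per-host read timeout in ms; 0 disables it
    tcpNoDelay: true,
  },
  encoding: {
    // Encode ECMAScript undefined as the CQL "unset" value (Cassandra 2.2+).
    useUndefinedAsUnset: true,
  },
});
```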