diff --git a/docs/README.md b/docs/README.md
index dbc1e61..7ff439d 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -84,16 +84,6 @@
 insert_res.job
 insert_res.unique_skipped_as_duplicated
 ```
-### Custom advisory lock prefix
-
-Unique job insertion takes a Postgres advisory lock to make sure that its uniqueness check still works even if two conflicting insert operations are occurring in parallel. Postgres advisory locks share a global 64-bit namespace, which is a large enough space that it's unlikely for two advisory locks to ever conflict, but to _guarantee_ that River's advisory locks never interfere with an application's, River can be configured with a 32-bit advisory lock prefix which it will use for all its locks:
-
-```ruby
-client = River::Client.new(mock_driver)
-```
-
-Doing so has the downside of leaving only 32 bits for River's locks (64 bits total - 32-bit prefix), making them somewhat more likely to conflict with each other.
-
 ## Inserting jobs in bulk
 
 Use `#insert_many` to bulk insert jobs as a single operation for improved efficiency:
diff --git a/lib/insert_opts.rb b/lib/insert_opts.rb
index baf24da..236038c 100644
--- a/lib/insert_opts.rb
+++ b/lib/insert_opts.rb
@@ -72,12 +72,6 @@ def initialize(
   # given job kind, a single instance is allowed for each combination of args
   # and queues. If either args or queue is changed on a new job, it's allowed to
   # be inserted as a new job.
-  #
-  # Uniquenes is checked at insert time by taking a Postgres advisory lock,
-  # doing a look up for an equivalent row, and inserting only if none was found.
-  # There's no database-level mechanism that guarantees jobs stay unique, so if
-  # an equivalent row is inserted out of band (or batch inserted, where a unique
-  # check doesn't occur), it's conceivable that duplicates could coexist.
  class UniqueOpts
    # Indicates that uniqueness should be enforced for any specific instance of
    # encoded args for a job.