LeaderElection

Supertypes

class Object
trait Matchable
class Any

Type members

Classlikes

class Live(contextInfo: Service, lock: LeaderLock) extends Service
class LiveTemporary(contextInfo: Service, lock: LeaderLock, leadershipLost: Queue[Unit]) extends Service
trait Service

Value members

Concrete methods

def configMapLock(lockName: String, retryPolicy: Schedule[Any, Any, Unit], deleteLockOnRelease: Boolean): ZLayer[ContextInfo & ConfigMaps & Pods, Nothing, LeaderElection]

Simple leader election implementation

The algorithm tries creating a ConfigMap with a given name and attaches the Pod it is running in as an owner of the config map.

If the ConfigMap already exists, the leader election fails and retries with exponential backoff. If the creation succeeds, the inner effect is run.

When the code terminates normally, the acquired ConfigMap is released. If the whole Pod gets killed without releasing the resource, the registered ownership makes Kubernetes apply cascading deletion, so eventually a new Pod can register the ConfigMap again.
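
A minimal wiring sketch in ZIO 1.x style; the import paths and the exact shape of the ContextInfo layer below are assumptions and may differ between zio-k8s versions:

import com.coralogix.zio.k8s.client.config.httpclient.k8sDefault
import com.coralogix.zio.k8s.client.v1.configmaps.ConfigMaps
import com.coralogix.zio.k8s.client.v1.pods.Pods
import com.coralogix.zio.k8s.operator.contextinfo.ContextInfo
import com.coralogix.zio.k8s.operator.leader.LeaderElection

// ConfigMaps and Pods clients on top of the default cluster configuration
val clients = k8sDefault >>> (ConfigMaps.live ++ Pods.live)

// ContextInfo describes the Pod this process runs in; building it from the
// Pods client alone is an assumption of this sketch.
val election =
  (clients ++ (clients >>> ContextInfo.live)) >>>
    LeaderElection.configMapLock(
      lockName            = "leader-lock", // hypothetical lock name
      retryPolicy         = LeaderElection.defaultRetryPolicy,
      deleteLockOnRelease = true
    )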

def customLeaderLock(lockName: String, retryPolicy: Schedule[Any, Any, Unit], deleteLockOnRelease: Boolean): ZLayer[ContextInfo & LeaderLockResources & Pods, Nothing, LeaderElection]

Simple leader election implementation based on a custom resource

The algorithm tries creating a LeaderLock resource with a given name and attaches the Pod it is running in as an owner of the LeaderLock resource.

If the LeaderLock already exists, the leader election fails and retries with exponential backoff. If the creation succeeds, the inner effect is run.

When the code terminates normally, the acquired LeaderLock is released. If the whole Pod gets killed without releasing the resource, the registered ownership makes Kubernetes apply cascading deletion, so eventually a new Pod can register the LeaderLock again.

This method requires the LeaderLock custom resource to be registered. As an alternative, take a look at configMapLock.
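
Construction mirrors the configMapLock sketch above, with the ConfigMaps client replaced by a client layer for the LeaderLock custom resource; LeaderLockResources is taken from the signature above, and the call shape is otherwise assumed:

import com.coralogix.zio.k8s.operator.leader.LeaderElection

// Requires the LeaderLock CRD to be registered in the cluster first.
// Environment: ContextInfo & LeaderLockResources & Pods (see signature).
val customElection =
  LeaderElection.customLeaderLock(
    lockName            = "leader-lock", // hypothetical lock name
    retryPolicy         = LeaderElection.defaultRetryPolicy,
    deleteLockOnRelease = true
  )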

def fromLock: ZLayer[Has[LeaderLock] & ContextInfo, Nothing, LeaderElection]

Constructs a leader election interface using a given LeaderLock layer

For built-in leader election algorithms, check configMapLock and customLeaderLock.
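
A sketch of plugging in a hand-written lock; the LeaderLock implementation and the ContextInfo layer are left abstract here, and the import of LeaderLock from the leader package is an assumption:

import zio._
import com.coralogix.zio.k8s.operator.contextinfo.ContextInfo
import com.coralogix.zio.k8s.operator.leader.{ LeaderElection, LeaderLock }

// A custom lock implementation provided as a layer (details elided)
val myLock: ULayer[Has[LeaderLock]] = ???

// Context info of the Pod this process runs in, as in the sketches above
val contextInfo: ULayer[ContextInfo] = ???

val election = (myLock ++ contextInfo) >>> LeaderElection.fromLock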

def leaseLock(lockName: String, leaseDuration: Duration, renewTimeout: Duration, retryPeriod: Duration): ZLayer[Random & ContextInfo & Leases, Nothing, LeaderElection]

Lease based leader election implementation

Leadership is not guaranteed to be held forever; the effect executed in runAsLeader may be interrupted. It is recommended to retry runAsLeader in these cases to try to reacquire the lease (see the sketch after the parameter list below).

This is a reimplementation of the Go leaderelection package: https://github.com/kubernetes/client-go/blob/master/tools/leaderelection/leaderelection.go

Value Params
leaseDuration

The duration that non-leader candidates must wait before acquiring leadership, measured against the time of the last observed change.

lockName

Name of the lease resource

renewTimeout

The maximum time a leader is allowed to try to renew its lease before giving up

retryPeriod

Retry period for acquiring and renewing the lease
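
A sketch using the Go client's commonly cited timings (15s lease duration, 10s renew timeout, 2s retry period) and retrying runAsLeader as recommended above; the runAsLeader accessor and its exact signature are assumptions of this sketch:

import zio._
import zio.duration._
import com.coralogix.zio.k8s.operator.leader._

val leaseElection =
  LeaderElection.leaseLock(
    lockName      = "leader-lease", // hypothetical lease name
    leaseDuration = 15.seconds,
    renewTimeout  = 10.seconds,
    retryPeriod   = 2.seconds
  )

// If leadership is lost the inner effect is interrupted, so retry
// runAsLeader to compete for the lease again (accessor assumed).
def guarded[R, E](task: ZIO[R, E, Unit]) =
  runAsLeader(task).retry(LeaderElection.defaultRetryPolicy)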

Concrete fields

val defaultRetryPolicy: Schedule[Any, Any, Unit]

Default retry policy for acquiring the lock
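
For illustration only, a retry policy of this type could be a capped exponential backoff; the library's actual default may use different parameters:

import zio._
import zio.duration._

// Exponential backoff starting at 1 second; the union with a 30-second
// spaced schedule takes the shorter delay, capping the backoff at 30s.
// .unit discards the schedule output to match Schedule[Any, Any, Unit].
val exampleRetryPolicy: Schedule[Any, Any, Unit] =
  (Schedule.exponential(1.second) || Schedule.spaced(30.seconds)).unit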