Cassandra Architecture

We are back with more on Cassandra, this time covering Cassandra Architecture, following my previous article, the Apache Cassandra tutorial, where you can learn the basics.

Architecture of Cassandra

The design goal of Cassandra is to handle big data workloads across multiple nodes without any single point of failure. Cassandra has a peer-to-peer distributed architecture across its nodes, and data is distributed among all the nodes in a cluster.

  • All the nodes in a cluster play the same role. Each node is independent and at the same time interconnected to other nodes.
  • Each node in a cluster can accept read and write requests, regardless of where the data is actually located in the cluster.
  • When a node goes down, read/write requests can be served from other nodes in the network.

Data Replication in Cassandra

In Cassandra, one or more of the nodes in a cluster act as replicas for a given piece of data. If it is detected that some of the nodes responded with an out-of-date value, Cassandra will return the most recent value to the client. After returning the most recent value, Cassandra performs a read repair in the background to update the stale values.
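To illustrate the idea, here is a minimal Python sketch of how a coordinator might pick the most recent value from replica responses and flag stale replicas for repair. This is purely illustrative, not Cassandra's actual implementation, and the replica/response structures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ReplicaResponse:
    node: str
    value: str
    timestamp: int  # write timestamp, used to decide which value is newest

def read_with_repair(responses):
    """Return the newest value and the replicas that need a repair."""
    newest = max(responses, key=lambda r: r.timestamp)
    stale_nodes = [r.node for r in responses if r.timestamp < newest.timestamp]
    # In Cassandra, the stale replicas would then be updated asynchronously (read repair).
    return newest.value, stale_nodes

responses = [
    ReplicaResponse("node1", "v2", timestamp=200),
    ReplicaResponse("node2", "v1", timestamp=100),  # out-of-date replica
    ReplicaResponse("node3", "v2", timestamp=200),
]
value, to_repair = read_with_repair(responses)
print(value, to_repair)  # v2 ['node2']
```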

The following figure shows a schematic view of how Cassandra uses data replication among the nodes in a cluster to ensure no single point of failure.

Data replication among the nodes in a Cassandra cluster.

Note − Cassandra uses the Gossip Protocol in the background to allow the nodes to communicate with each other and detect any faulty nodes in the cluster.


Cassandra Cluster/Ring

Cluster Bootstrapping

Every Cassandra cluster must be assigned a name. All nodes participating in a cluster have the same name. Seed nodes are used during startup to help discover all participating nodes. Seed nodes have no special purpose other than helping bootstrap the cluster using the gossip protocol. When a node starts up, it looks to its seed list to obtain information about the other nodes in the cluster. Cassandra uses the gossip protocol for intra-cluster communication and failure detection. A node exchanges state information with a maximum of three other nodes. State information is exchanged every second and contains information about itself and all other known nodes. This enables each node to learn about every other node in the cluster even though it is communicating with a small subset of nodes.
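As a rough illustration, the following Python sketch (a simplification, not the real gossip implementation) simulates rounds in which each node merges its known state with up to three randomly chosen peers, showing how knowledge of the whole cluster spreads even though each node talks to only a small subset.

```python
import random

def gossip_round(knowledge, peers_per_round=3):
    """One gossip round: every node merges state with up to three random peers."""
    nodes = list(knowledge)
    for node in nodes:
        peers = random.sample([n for n in nodes if n != node],
                              k=min(peers_per_round, len(nodes) - 1))
        for peer in peers:
            merged = knowledge[node] | knowledge[peer]
            knowledge[node] = knowledge[peer] = merged

# Each node initially knows only about itself (plus its seed list in practice).
knowledge = {f"node{i}": {f"node{i}"} for i in range(1, 9)}
rounds = 0
while any(len(known) < len(knowledge) for known in knowledge.values()):
    gossip_round(knowledge)
    rounds += 1
print(f"every node learned about all {len(knowledge)} nodes in {rounds} rounds")
```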

Cassandra Ring


Example Cassandra ring distributing a 0 to 255 token range evenly across four nodes.

A Cassandra cluster is visualised as a ring because it uses a consistent hashing algorithm to distribute data. At startup, each node is assigned a token range which determines its position in the cluster and the range of data stored by the node. Each node receives a proportionate share of the token range to ensure that data is spread evenly across the ring. The figure above illustrates dividing a 0 to 255 token range evenly amongst a four-node cluster. Each node is assigned a token and is responsible for token values from the previous node's token (exclusive) to its own token (inclusive). Each node in a Cassandra cluster is responsible for a certain set of data which is determined by the partitioner. A partitioner is a hash function that computes the token for a particular row key. This token is then used to determine the node which will store the first replica. Currently Cassandra offers the Murmur3 Partitioner (default), the Random Partitioner and the Byte Ordered Partitioner.
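The sketch below (Python, purely illustrative; the toy hash is not Cassandra's Murmur3 and the key names are made up) divides a 0 to 255 token range evenly across four nodes, as in the figure, and routes a row key to the node owning its token.

```python
import hashlib

# Four nodes, each assigned the upper (inclusive) end of its token range,
# mirroring the 0-255 ring in the figure: 0-63, 64-127, 128-191, 192-255.
RING = [(63, "node1"), (127, "node2"), (191, "node3"), (255, "node4")]

def token_for(row_key: str) -> int:
    """Toy partitioner: hash the row key into the 0-255 token space.
    (Cassandra's default Murmur3Partitioner uses a far larger token space.)"""
    return hashlib.md5(row_key.encode()).digest()[0]

def node_for(row_key: str) -> str:
    """A node owns tokens from the previous node's token (exclusive) to its own (inclusive)."""
    token = token_for(row_key)
    for upper, node in RING:
        if token <= upper:
            return node
    return RING[0][1]  # wrap around (not reachable with a 0-255 token space)

for key in ["user:42", "user:99", "order:7"]:
    print(key, "->", token_for(key), "->", node_for(key))
```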

Replication

Cassandra also replicates data according to the chosen replication strategy. The replication strategy determines the placement of the replicated data. There are two main replication strategies used by Cassandra: the Simple Strategy and the Network Topology Strategy. The first replica for the data is determined by the partitioner. The placement of the subsequent replicas is determined by the replication strategy. The Simple Strategy places the subsequent replicas on the next nodes in a clockwise manner. The Network Topology Strategy works well when Cassandra is deployed across data centres. It is data centre aware and makes sure that replicas are not stored on the same rack. Cassandra uses snitches to discover the overall network topology. This information is used to efficiently route inter-node requests within the bounds of the replica placement strategy.
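As a hedged sketch of the Simple Strategy described above (Python, illustrative only, with a made-up ring), the subsequent replicas are placed on the next nodes clockwise around the ring after the node chosen by the partitioner.

```python
def simple_strategy_replicas(ring_nodes, first_replica, replication_factor):
    """Place replicas on the first-replica node and the next nodes clockwise.
    ring_nodes must be listed in ring (token) order."""
    start = ring_nodes.index(first_replica)
    return [ring_nodes[(start + i) % len(ring_nodes)]
            for i in range(replication_factor)]

ring = ["node1", "node2", "node3", "node4"]  # clockwise ring order
print(simple_strategy_replicas(ring, "node3", replication_factor=3))
# ['node3', 'node4', 'node1']
```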


Components of Cassandra

The key components of Cassandra are as follows −
  • Node − It is the place where data is stored.
  • Data center − It is a collection of related nodes.
  • Cluster − A cluster is a component that contains one or more data centers.
  • Commit log − The commit log is a crash-recovery mechanism in Cassandra. Every write operation is written to the commit log.
  • Mem-table − A mem-table is a memory-resident data structure. After the commit log, the data is written to the mem-table. Sometimes, for a single column family, there will be multiple mem-tables.
  • SSTable − It is a disk file to which the data is flushed from the mem-table when its contents reach a threshold value.
  • Bloom filter − These are quick, nondeterministic algorithms for testing whether an element is a member of a set. A bloom filter is a special kind of cache that is consulted during read queries so that SSTables which definitely do not contain the requested data can be skipped; a minimal sketch follows this list.
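Here is a minimal Python sketch of a bloom filter (illustrative, not Cassandra's implementation): it may report false positives but never false negatives, which is why it can safely be used to skip SSTables that cannot contain a row.

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hash_count=3):
        self.size = size
        self.hash_count = hash_count
        self.bits = [False] * size

    def _positions(self, key: str):
        # Derive several bit positions from independent hashes of the key.
        for i in range(self.hash_count):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key: str) -> bool:
        # False means definitely absent; True means possibly present.
        return all(self.bits[pos] for pos in self._positions(key))

bf = BloomFilter()
bf.add("row-key-1")
print(bf.might_contain("row-key-1"))    # True
print(bf.might_contain("row-key-999"))  # very likely False
```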

Cassandra Query Language

Users can access Cassandra through its nodes using Cassandra Query Language (CQL). CQL treats the database (keyspace) as a container of tables. Programmers work with CQL either through cqlsh, a command-line prompt, or through separate application language drivers.

Clients can approach any of the nodes for their read-write operations. That node (the coordinator) acts as a proxy between the client and the nodes holding the data.
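For example, with the DataStax Python driver (one of the application language drivers mentioned above), a client connects to any reachable node, and that node coordinates the requests. The contact point address, keyspace and table names below are hypothetical.

```python
from cassandra.cluster import Cluster

# Connect to any reachable node; that node will coordinate our requests.
cluster = Cluster(["127.0.0.1"])   # contact point, hypothetical address
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("demo")
session.execute("""
    CREATE TABLE IF NOT EXISTS users (user_id int PRIMARY KEY, name text)
""")

session.execute("INSERT INTO users (user_id, name) VALUES (%s, %s)", (1, "alice"))
for row in session.execute("SELECT user_id, name FROM users"):
    print(row.user_id, row.name)

cluster.shutdown()
```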


Write Operations

Every write is first captured in the commit log on the node that receives it. The data is then written to the mem-table. Whenever the mem-table is full, its data is flushed to an SSTable data file. All writes are automatically partitioned and replicated throughout the cluster. Cassandra periodically consolidates the SSTables, discarding unnecessary data.
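Below is a hedged Python sketch of that write path (illustrative only, with in-memory lists standing in for disk files): append to a commit log, write into an in-memory mem-table, and flush to an immutable SSTable once the mem-table reaches a threshold.

```python
class WritePath:
    """Toy model of Cassandra's write path: commit log -> mem-table -> SSTable."""

    def __init__(self, memtable_threshold=3):
        self.commit_log = []       # append-only crash-recovery log
        self.memtable = {}         # in-memory structure, sorted on flush
        self.sstables = []         # immutable "on-disk" files (lists here)
        self.memtable_threshold = memtable_threshold

    def write(self, key, value):
        self.commit_log.append((key, value))   # 1. durably log the write
        self.memtable[key] = value             # 2. update the mem-table
        if len(self.memtable) >= self.memtable_threshold:
            self._flush()                      # 3. flush when the threshold is hit

    def _flush(self):
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

wp = WritePath()
for i in range(5):
    wp.write(f"k{i}", f"v{i}")
print(len(wp.sstables), "SSTable(s) flushed,", len(wp.memtable), "rows still in mem-table")
```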

Read Operations

During read operations, Cassandra first looks for the value in the mem-table and then checks the bloom filters to find the appropriate SSTables that may hold the required data.
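A matching Python sketch of the read path follows (again illustrative; a plain set of keys stands in for each SSTable's bloom filter): check the mem-table first, then consult each SSTable's bloom filter before searching it.

```python
def read(key, memtable, sstables):
    """Toy read path: mem-table first, then only the SSTables whose
    bloom filter (approximated here by a set of keys) might contain the key."""
    if key in memtable:
        return memtable[key]
    for bloom, sstable in sstables:
        if key in bloom:                # bloom filter says "possibly present"
            if key in sstable:
                return sstable[key]     # Cassandra would merge results, newest first
    return None

memtable = {"k5": "v5"}
sstables = [({"k1", "k2"}, {"k1": "v1", "k2": "v2"}),
            ({"k3", "k4"}, {"k3": "v3", "k4": "v4"})]
print(read("k5", memtable, sstables))  # from the mem-table
print(read("k3", memtable, sstables))  # from an SSTable
print(read("k9", memtable, sstables))  # None: no bloom filter matches
```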

