
Scaling Node.js Microservices: Building Systems That Grow Without Breaking

Your Node.js service is under heavy pressure from RPS (requests per second), and you are praying you do not get paged over it. Microservices can come to the rescue – as long as you do not fall into the usual JavaScript traps. In this guide, I will walk through the core scaling concepts with real Node.js applications, starting with service decomposition.


1. Service decomposition: the art of breaking up the monolith

The issue: the “God service” trap

Picture an all-in-one Express app that handles users, orders, payments, and inventory. It works… until the payment logic fails and takes the whole thing down, dragging user login along with it.

// 🚫 Monolithic disaster (app.js)  
const express = require('express');  
const app = express();  

// User routes  
app.post('/users', (req, res) => { /* ... */ });  

// Order routes  
app.post('/orders', (req, res) => {  
  // Checks inventory, processes payment, updates user history...  
});  

// Payment routes  
app.post('/payments', (req, res) => { /* ... */ });  

app.listen(3000); 

The solution: domain-driven design (DDD) for Express

Split it into multiple services:

  • User service (user-service/index.js):
const express = require('express');  
const app = express();  
app.post('/users', (req, res) => { /* ... */ });  
app.listen(3001);
  • Order service (order-service/index.js):
const express = require('express');  
const app = express();  
app.post('/orders', (req, res) => { /* ... */ });  
app.listen(3002); 
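
The payment and inventory services follow the same pattern. Here is a minimal sketch of the payment service (port 3003 matches the payment service used later in this guide; the rest is illustrative):

// Payment service (payment-service/index.js)
const express = require('express');
const app = express();
app.post('/payments', (req, res) => { /* ... */ });
app.listen(3003);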

Advantages:

  • Isolated failures: a payment service outage no longer takes user login down with it.
  • Independent scaling: during a sale, you can add more instances of the order service alone.

Drawbacks:

  • Network latency: services now talk to each other over HTTP (hello, latency!).
  • DevOps complexity: you now deploy four services instead of one.

2. Communication: escaping synchronous hell

The problem: timeouts all the way down

Inside the order service, you call the user service and then the payment service synchronously. One slow response stalls the entire flow.

// 🚫 Order service (order-service/index.js)  
const axios = require('axios');  

app.post('/orders', async (req, res) => {  
  // Call user service  
  const { data: user } = await axios.get('http://user-service:3001/users/123');  

  // Call payment service  
  const payment = await axios.post('http://payment-service:3003/payments', {  
    userId: user.id,  
    amount: 100  
  });  
  // ...  
}); 

The solution: going asynchronous with RabbitMQ

Use a message broker to decouple the services:

  • The order service creates the order and publishes an order.created event.
  • The payment service consumes the event and charges the user.
// Order Service (publish event)  
const amqp = require('amqplib');  

async function publishOrderCreated(order) {  
  const conn = await amqp.connect('amqp://localhost');  
  const channel = await conn.createChannel();  
  await channel.assertExchange('orders', 'topic', { durable: true });  
  channel.publish('orders', 'order.created', Buffer.from(JSON.stringify(order)));  
}  

app.post('/orders', async (req, res) => {  
  const order = createOrder(req.body);  
  await publishOrderCreated(order); // Non-blocking  
  res.status(202).json({ status: 'processing' });  
}); 
// Payment Service (consume event)  
const amqp = require('amqplib');  

async function consumeOrders() {  
  const conn = await amqp.connect('amqp://localhost');  
  const channel = await conn.createChannel();  
  await channel.assertExchange('orders', 'topic', { durable: true });  
  const queue = await channel.assertQueue('', { exclusive: true });  
  channel.bindQueue(queue.queue, 'orders', 'order.created');  

  channel.consume(queue.queue, (msg) => {  
    const order = JSON.parse(msg.content.toString());  
    processPayment(order);  
    channel.ack(msg);  
  });  
}  

consumeOrders(); 
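
One detail worth calling out for the “messages pile up” benefit listed below: the anonymous exclusive queue above disappears when the consumer disconnects. For messages to actually accumulate while the payment service is down, you want a named, durable queue instead (the queue name here is an assumption):

// Durable, named queue so messages survive consumer restarts
const queue = await channel.assertQueue('payment-service.orders', { durable: true });
channel.bindQueue(queue.queue, 'orders', 'order.created');

channel.consume(queue.queue, async (msg) => {
  try {
    await processPayment(JSON.parse(msg.content.toString()));
    channel.ack(msg); // acknowledge only after the payment succeeded
  } catch (err) {
    channel.nack(msg, false, true); // requeue for a later retry
  }
});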

Pros:

  • Asynchronous processing: the payment service consumes events at its own pace.
  • Decoupled services: payment service down? Not a problem, messages pile up and are retried later.
  • The order service responds faster, returning 202 Accepted instead of waiting on the payment.

Cons:

  • Eventual consistency: the user can see “processing” before the payment has actually gone through.
  • Harder debugging: tracing a payment failure across queues can require something like the RabbitMQ management UI.
  • These are classic pain points of event-driven architecture, and they tend to surface only after the events have already flowed through the system.

3. Data Management: Do not share databases

The problem: the shared database

All of the microservices work against a single shared PostgreSQL orders database. The everything-in-one-place approach looks elegant, but a schema change made for the inventory microservice can break the order service.

The fix: each service gets its own database + event sourcing.

  • Order service: has its own orders database and owns it.
  • Inventory service: keeps a separate store, for example Redis for stock counts (see the sketch below).
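
As a sketch of what that separate store can look like, the inventory service could keep its counters in Redis and update them from the same order.created events (the key layout, order shape, and ioredis usage here are assumptions):

// Inventory service: own data store (Redis), updated from order events
const Redis = require('ioredis');
const redis = new Redis('redis://localhost:6379');

async function handleOrderCreated(order) {
  // Decrement the stock counter for each line item in the order
  for (const item of order.items) {
    await redis.decrby(`stock:${item.sku}`, item.quantity);
  }
}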

Example: event sourcing for eventual consistency

// Order Service saves events  
const { OrderEvent } = require('./models');  

async function createOrder(orderData) {  
  await OrderEvent.create({  
    type: 'ORDER_CREATED',  
    payload: orderData  
  });  
}  

// Materialized view for queries  
const { Order } = require('./models');  

async function rebuildOrderView() {  
  const events = await OrderEvent.findAll();  
  // Replay events to rebuild the current state, keyed by order id  
  const ordersById = events.reduce((acc, event) => {  
    // Apply event logic (e.g., ORDER_CREATED adds an order, ORDER_CANCELLED removes it)  
    return acc;  
  }, {});  
  await Order.bulkCreate(Object.values(ordersById));  
} 

Pros:

  • Audit trail: every single change is recorded as an event.
  • Rebuildability: views can be rebuilt whenever the read requirements change.

cons:

  • Architectural complexity: you also need a mechanism for replaying events.
  • Storage growth: millions of events add up, and the database can quickly lose its efficiency.

4. Deployment: autoscaling with Kubernetes

The problem: scaling manually at 3 AM

Your payment service gets hammered during peak traffic, and you end up adding EC2 instances one at a time by hand.

The fix: containerize the payment service and let Kubernetes handle the scaling.

Define the Deployment and HorizontalPodAutoscaler for the payment service (payment-service.yaml):

apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: payment-service  
spec:  
  replicas: 2  
  selector:  
    matchLabels:  
      app: payment-service  
  template:  
    metadata:  
      labels:  
        app: payment-service  
    spec:  
      containers:  
      - name: payment  
        image: your-registry/payment-service:latest  
        ports:  
        - containerPort: 3003  
        resources:  
          requests:  
            cpu: "100m"  
          limits:  
            cpu: "200m"  
---  
apiVersion: autoscaling/v2  
kind: HorizontalPodAutoscaler  
metadata:  
  name: payment-service  
spec:  
  scaleTargetRef:  
    apiVersion: apps/v1  
    kind: Deployment  
    name: payment-service  
  minReplicas: 2  
  maxReplicas: 10  
  metrics:  
  - type: Resource  
    resource:  
      name: cpu  
      target:  
        type: Utilization  
        averageUtilization: 70 
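
Rolling it out and watching the autoscaler is then a couple of commands (assuming you already have a cluster and the image pushed to your registry):

kubectl apply -f payment-service.yaml
kubectl get hpa payment-service --watch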

Pros:

  • Self-healing: if a container crashes, Kubernetes restarts it by default.
  • Cost savings: it scales down at night when there is no traffic.

Cons:

  • YAML overload: configuration is the new chaos.
  • Cold starts: new pods take a while to come up.

5. Observability: logs, traces, and metrics

The problem: “the payment service is slow”

Shipping without logs will leave you guessing where the failure happens.

The fix: Winston + OpenTelemetry

// Logging with Winston (payment-service/logger.js)  
const winston = require('winston');  

const logger = winston.createLogger({  
  level: 'info',  
  format: winston.format.json(),  
  transports: [  
    new winston.transports.File({ filename: 'error.log', level: 'error' }),  
    new winston.transports.Console()  
  ]  
});  

// In your route handler  
app.post('/payments', async (req, res) => {  
  logger.info('Processing payment', { userId: req.body.userId });  
  // ...  
});

Distributed tracing with OpenTelemetry:

const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');  
const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');  
const { JaegerExporter } = require('@opentelemetry/exporter-jaeger');  

const provider = new NodeTracerProvider();  
provider.addSpanProcessor(  
  new SimpleSpanProcessor(new JaegerExporter({ endpoint: 'http://jaeger:14268/api/traces' }))  
);  
provider.register(); 
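
The heading also promises metrics; a minimal Prometheus sketch with prom-client could look like this (the metric name is an assumption, and Prometheus still has to be pointed at the /metrics endpoint):

// Metrics with prom-client (payment-service/metrics.js)
const client = require('prom-client');

// Default Node.js metrics: event loop lag, heap usage, etc.
client.collectDefaultMetrics();

// Custom histogram; wrap processPayment with paymentDuration.startTimer()
const paymentDuration = new client.Histogram({
  name: 'payment_duration_seconds',
  help: 'Time spent processing a payment',
  buckets: [0.1, 0.5, 1, 2, 5]
});

// Scrape endpoint for Prometheus
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});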

Pros:

  • Trace flows: understand how a request flows through the services.
  • Error context: logs carry the user ID, the order ID, and so on.

Cons:

  • Performance overhead: tracing adds its own latency.
  • Tool sprawl: Jaeger, Prometheus, Grafana… that is a lot of tools.

6. Fault tolerance: circuit breakers and retries

The problem: cascading failures

The user service dies, and the order service keeps calling it in the hope that a retry will succeed, effectively DoS-ing itself in the process.

The fix: a circuit breaker + retry combo with cockatiel

const { retry, handleAll, circuitBreaker, ExponentialBackoff, SamplingBreaker } = require('cockatiel');  

// Circuit breaker: stop calling a failing service  
const breakerPolicy = circuitBreaker(handleAll, {  
  halfOpenAfter: 10_000, // probe the service again after 10s  
  breaker: new SamplingBreaker({  
    threshold: 0.5,   // 50% failure rate trips the breaker  
    duration: 30_000  // measured over a 30s window  
  })  
});  

// Retry with exponential backoff  
const retryPolicy = retry(handleAll, {  
  maxAttempts: 3,  
  backoff: new ExponentialBackoff()  
});  

// Wrap API calls  
app.post('/orders', async (req, res) => {  
  try {  
    await retryPolicy.execute(() =>  
      breakerPolicy.execute(() => axios.get('http://user-service:3001/users/123'))  
    );  
    // ...  
  } catch (error) {  
    // Fallback logic (see the sketch below)  
  }  
});

Pros:

  • Fail fast: stop trying to reach a service that is already broken.
  • Self-healing: after the cool-down period, the breaker resets itself.

Cons:

  • Configuration hell: you have to tune your retry and breaker thresholds.
  • Fallback logic: you still need to handle the failure path gracefully (see the sketch after this list).
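
For what “gracefully” can mean in practice, here is one minimal fallback sketch reusing the policies above (the stale-cache idea and status codes are assumptions, not part of cockatiel):

// Fallback: serve the last known user data, otherwise ask the client to retry
let lastKnownUser = null;

app.post('/orders', async (req, res) => {
  try {
    const { data: user } = await retryPolicy.execute(() =>
      breakerPolicy.execute(() => axios.get('http://user-service:3001/users/123'))
    );
    lastKnownUser = user;
    // ... create the order as usual
    res.status(202).json({ status: 'processing' });
  } catch (error) {
    if (lastKnownUser) {
      // Degrade gracefully with stale data rather than failing the request
      return res.status(202).json({ status: 'processing', stale: true });
    }
    res.status(503).json({ error: 'Temporarily unavailable, please retry shortly' });
  }
});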

Staying alive in the microservices maze: common questions

Q: When should you break up a monolith?
A: Typical signs:

  • Deployments of one part of the app sit blocked waiting on other parts.
  • Certain parts of the application need more resources than others (for example, analytics vs. payments).
  • You are resolving endless merge conflicts in package.json.

Q: REST vs. GraphQL vs. gRPC. What is the difference?

A: REST: the default for public APIs (for example, the ones mobile apps consume).

  • GraphQL: when clients need to pull flexible, dynamic data (example: dashboards).
  • gRPC: for internal service-to-service calls where performance really matters (Protobuf FTW).

Q: What is the approach to distributed transactions?
A: Implement the saga pattern (sketched below):

  1. The order service creates the order (status: pending).
  2. The payment service tries to charge the user.
  3. If the charge fails, the order service marks the order as failed and notifies the user.
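
A stripped-down sketch of those steps as events, reusing the RabbitMQ setup from section 2 (the publish/subscribe helpers and event names are assumptions):

// Order service: start the saga
app.post('/orders', async (req, res) => {
  const order = await createOrder({ ...req.body, status: 'PENDING' });
  await publish('order.created', order);
  res.status(202).json({ orderId: order.id, status: 'PENDING' });
});

// Order service: react to the payment outcome
subscribe('payment.succeeded', async ({ orderId }) => {
  await updateOrderStatus(orderId, 'CONFIRMED');
});

subscribe('payment.failed', async ({ orderId, reason }) => {
  // Compensating action: mark the order failed and notify the user
  await updateOrderStatus(orderId, 'FAILED');
  await notifyUser(orderId, reason);
});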

Final thoughts

Scaling microservices with Node.js is exciting, but it is also risky. Take a step-by-step approach: start with the smallest solution that works, then split out services only when you need to. Always have a contingency plan for when things go wrong. And remember: observability is not a nice-to-have, it is a must. You cannot fix problems you cannot see.

So go out and conquer that monolith. Your ops team will thank you. 🔥
