Signal Denoising Using Autoencoders

Signal

A signal may be defined as any observable change in a quantity over space or time, even if it does not carry information. Signals can broadly be classified into two types:

  • Analog Signal
  • Digital Signal


Analog Signal

An analog signal is a continuous stream of values: it is defined at every instant in time and can take any value within its range.


Digital Signal

A digital signal is a discrete stream of values: it is defined only at discrete points in time and can take only a finite set of values.
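As a quick illustration (a minimal sketch, not part of the original project), the snippet below approximates an analog sine with a dense sample grid and derives a digital version by sampling it coarsely and quantizing the values to a small, fixed set of levels:

import numpy as np

t = np.linspace(0, 1, 1000)                # dense time grid: stands in for the analog signal
analog = np.sin(2 * np.pi * 5 * t)         # a 5 Hz sine can take any value in [-1, 1]

t_d = t[::50]                              # keep every 50th point: discrete in time
digital = np.round(analog[::50] * 4) / 4   # round to steps of 0.25: discrete in amplitude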


Project Introduction

This project generates a sinusoidal signal, adds Additive White Gaussian Noise (AWGN) to it, and denoises it with an autoencoder model.


Library Import


import numpy as np
import matplotlib.pyplot as plt


Generating the Sinusoidal Signal

To generate a sample sinusoidal signal, we can use the code below:

t = np.linspace(1, 100, 1000)      # 1000 time samples from t = 1 to t = 100
v = 10 * np.sin(t / (2 * np.pi))   # slowly varying sine with amplitude 10
This generates a slowly varying sine wave.
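Since the original plot is not reproduced here, a short matplotlib snippet (using the t and v arrays defined above) renders the same figure:

plt.plot(t, v)
plt.title("Generated Sinusoidal Signal")
plt.xlabel("Time")
plt.ylabel("Voltage")
plt.show()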
Now we compute the power of the generated signal by squaring the voltage:

w = v ** 2  # instantaneous power in watts (assuming a 1-ohm load)
Squaring makes every sample non-negative, so the resulting power trace oscillates between 0 and 100 W.

To convert the power from watts to dB we use the standard relation P_dB = 10 * log10(P_W):

w_db = 10 * np.log10(w)  # power in dB relative to 1 W (dBW)

The dB plot compresses the dynamic range; it shows sharp downward spikes near the voltage zero-crossings, where the power approaches 0 W and its logarithm diverges.

Noise Generation

We choose a target SNR (signal-to-noise ratio) in dB, compute the signal's average power, and convert it to dB. Subtracting the target SNR gives the required noise power in dB; we convert that back to watts, draw zero-mean Gaussian samples with that variance, and add them to the clean signal to obtain the noisy signal.


target_snr_db = 20
# Calculate signal power and convert to dB
sig_avg_watts = np.mean(w)
sig_avg_db = 10 * np.log10(sig_avg_watts)
# Noise power in dB is signal power minus the target SNR; convert back to watts
noise_avg_db = sig_avg_db - target_snr_db
noise_avg_watts = 10 ** (noise_avg_db / 10)
# Generate a sample of white noise (zero mean, variance = noise power)
mean_noise = 0
noise_volts = np.random.normal(mean_noise, np.sqrt(noise_avg_watts), len(w))
# Noise up the original signal
y_volts = v + noise_volts

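As an optional sanity check (not in the original post), we can estimate the empirical SNR of the noisy signal and confirm it lands near the 20 dB target:

# Empirical SNR: signal power over noise power, in dB
snr_est = 10 * np.log10(np.mean(v ** 2) / np.mean(noise_volts ** 2))
print("Estimated SNR: {:.2f} dB".format(snr_est))  # should be close to 20 dB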
After this, the noisy signal looks like the clean sine with a jittery band of Gaussian noise around it.


For training the deep learning model we need data samples, so we randomly generate them by wrapping the signal and noise logic above in functions.

def signal_gen():
  # Random end point for the time axis, so each sample covers a different span
  l = np.random.randint(1, 100)
  t = np.linspace(1, l, 1000)
  v = 10 * np.sin(t / (2 * np.pi)) / 1000  # scaled down by 1000 to keep amplitudes small
  return v


def noise_gen(v):
  # Same AWGN logic as above: noise power set 20 dB below the signal's average power
  w = v ** 2
  target_snr_db = 20
  sig_avg_watts = np.mean(w)
  sig_avg_db = 10 * np.log10(sig_avg_watts)
  noise_avg_db = sig_avg_db - target_snr_db
  noise_avg_watts = 10 ** (noise_avg_db / 10)
  mean_noise = 0
  noise_volts = np.random.normal(mean_noise, np.sqrt(noise_avg_watts), len(w))
  y_volts = v + noise_volts
  return y_volts

To view a sample from the generated dataset we can use the following code snippet.


v = signal_gen()

# Draw both subplots before calling show() once, so they share a single figure
plt.subplot(2, 1, 1)
plt.title("Random Signal")
plt.plot(v)

plt.subplot(2, 1, 2)
plt.title("Random Signal with Noise")
plt.plot(noise_gen(v))

plt.tight_layout()
plt.show()

The result is two stacked plots: the clean random signal on top and the same signal with added noise below.


To generate the dataset we use the following code snippet.


signal = []
noisy_signal = []

# Build 1000 (clean, noisy) training pairs
for i in range(1000):
  v = signal_gen()
  signal.append(v)
  noisy_signal.append(noise_gen(v))
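As an optional variant (not in the original notebook), the two lists can be stacked into tensors up front, which avoids per-sample conversion during training and opens the door to mini-batching:

import torch

# Shape (1000, 1000): one row per sample, 1000 points per sample
clean_tensor = torch.tensor(np.stack(signal), dtype=torch.float32)
noisy_tensor = torch.tensor(np.stack(noisy_signal), dtype=torch.float32)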


Defining the Deep Learning Model

To perform the denoising we use a simple linear autoencoder with one encoder layer and one decoder layer. Each input sample has 1000 points, and the dataset contains 1000 such samples.


import torch
import torch.nn as nn

class DeNoise(nn.Module):
  def __init__(self):
    super(DeNoise, self).__init__()
    # Encoder: compress the 1000-point signal to 800 features
    self.lin1 = nn.Linear(1000, 800)
    # Decoder: reconstruct the 1000-point signal
    self.lin_t1 = nn.Linear(800, 1000)

  def forward(self, x):
    x = torch.tanh(self.lin1(x))  # torch.tanh replaces the deprecated F.tanh
    x = self.lin_t1(x)
    return x

model = DeNoise().cuda()  # assumes a CUDA-capable GPU is available
print(model)
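As a quick sanity check (optional, not in the original post), a random input of the right length should pass through the network and come back with the same shape:

x = torch.randn(1000).cuda()   # dummy 1000-sample input (assumes a GPU, as above)
print(model(x).shape)          # expected: torch.Size([1000])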

Here we use a tanh activation function because the minimum and maximum of a sinusoid are -1 and 1; since tanh has the same output range, we found it to be well suited to this application.
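As a small illustration (optional), tanh squashes any input into that same (-1, 1) interval:

x = torch.linspace(-5, 5, 5)
print(torch.tanh(x))   # every output lies strictly inside (-1, 1)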


Defining the Loss and Optimization functions


import torch

criterion = nn.MSELoss()  # mean squared error between the reconstruction and the clean target

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam with a 1e-3 learning rate


Defining the Training Loop


def train(n_epochs, model):
  training_loss = []

  for epoch in range(n_epochs):
    trainloss = 0.0
    for sig, noisig in zip(signal, noisy_signal):
      # Move the clean/noisy pair to the GPU
      sig = torch.Tensor(sig).cuda()
      noisig = torch.Tensor(noisig).cuda()

      optimizer.zero_grad()
      output = model(noisig)          # reconstruct from the noisy input
      loss = criterion(output, sig)   # compare against the clean target
      loss.backward()
      optimizer.step()
      trainloss += loss.item()
    print("Epoch: {}, Training Loss: {}".format(epoch + 1, trainloss / len(signal)))
    training_loss.append(trainloss / len(signal))

  plt.plot(training_loss)
  plt.show()
  print("Training Completed !!!")


Training the Model


train(10, model)

During training the per-epoch loss is printed and should decrease steadily; the final plot traces this loss curve over the 10 epochs.

Visualizing the Results


def plot(i):
  # Feed the noisy sample through the trained model (the denoising step);
  # no_grad since we are only doing inference
  model.eval()
  with torch.no_grad():
    pred = model(torch.Tensor(noisy_signal[i]).cuda()).cpu()

  plt.subplot(3, 1, 1)
  plt.title("Original Signal")
  plt.xlabel("Time")
  plt.ylabel("Voltage")
  plt.plot(signal[i])

  plt.subplot(3, 1, 2)
  plt.title("Noisy Signal")
  plt.xlabel("Time")
  plt.ylabel("Voltage")
  plt.plot(noisy_signal[i])

  plt.subplot(3, 1, 3)
  plt.title("Predicted Signal")
  plt.xlabel("Time")
  plt.ylabel("Voltage")
  plt.plot(pred.numpy())

  plt.tight_layout()
  plt.show()

The above block produces three stacked plots: the original signal, its noisy version, and the autoencoder's denoised prediction.

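Beyond visual inspection, a small optional snippet (not in the original post) quantifies the improvement by comparing the SNR of the noisy input with the SNR of the denoised output for one sample:

def snr_db(reference, estimate):
  # SNR of an estimate relative to the clean reference, in dB
  noise_power = np.mean((estimate - reference) ** 2)
  return 10 * np.log10(np.mean(reference ** 2) / noise_power)

i = 0
clean = np.asarray(signal[i])
noisy = np.asarray(noisy_signal[i])
with torch.no_grad():
  denoised = model(torch.Tensor(noisy).cuda()).cpu().numpy()

print("Noisy SNR:    {:.2f} dB".format(snr_db(clean, noisy)))
print("Denoised SNR: {:.2f} dB".format(snr_db(clean, denoised)))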

Conclusion

In this project, we successfully implemented a signal denoiser using a PyTorch-based autoencoder model.

Code

GitHub: https://github.com/srimanthtenneti/Autoencoders/blob/main/Signal_Denoiser.ipynb

Please feel free to connect.

Contact

LinkedIn : https://www.linkedin.com/in/srimanth-tenneti-662b7117b/
