<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Bibek Chalise || CEO || Entrepreneur || MarginTop Solutions Pvt Ltd]]></title><description><![CDATA[Bibek Chalise || CEO || Entrepreneur || MarginTop Solutions Pvt Ltd]]></description><link>https://www.chalisebibek.com.np</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 17:47:33 GMT</lastBuildDate><atom:link href="https://www.chalisebibek.com.np/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Image Classification with PyTorch Lightning (Simple ANN)]]></title><description><![CDATA[Introduction
Hello everyone, welcome to my blog. In this tutorial, we will learn how to do simple image classification as we did in the last tutorial, but this time we will leverage PyTorch Lightning to do the same task with ease. So, I...]]></description><link>https://www.chalisebibek.com.np/image-classification-with-pytorch-lightning-simple-ann</link><guid isPermaLink="true">https://www.chalisebibek.com.np/image-classification-with-pytorch-lightning-simple-ann</guid><category><![CDATA[Python]]></category><category><![CDATA[pytorch]]></category><category><![CDATA[pytorch lightning]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[image processing]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Tue, 29 Aug 2023 06:59:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1693292056953/7465fe7c-7cc8-4cfd-bbd9-ad6b710babd3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Hello everyone, welcome to my blog. In this tutorial, we will learn how to do simple image classification as we did in the <a target="_blank" href="https://www.chalisebibek.com.np/pytorch-101-image-classification-with-simpleann">last tutorial</a>, but this time we will leverage PyTorch Lightning to do the same task with ease. I am planning to release both the core PyTorch implementation and the PyTorch Lightning implementation.</p>
<p><img src="https://www.datanami.com/wp-content/uploads/2023/03/lightning-ai.png" alt="Alt text" /></p>
<p>Without further ado, let's dive directly into the code.</p>
<p><code>In Local Machine</code></p>
<pre><code class="lang-bash">    pip3 install lightning
</code></pre>
<p><code>In Colab Notebook</code></p>
<pre><code class="lang-python">    !pip install lightning
</code></pre>
<p>First, we need to install <code>lightning</code> on our machine. For this tutorial, we will be using <code>Colab</code>, where we also have to install it.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">from</span> torch <span class="hljs-keyword">import</span> nn
<span class="hljs-keyword">import</span> torchvision
<span class="hljs-keyword">import</span> torch.nn.functional <span class="hljs-keyword">as</span> F
<span class="hljs-keyword">from</span> torchvision <span class="hljs-keyword">import</span> transforms
<span class="hljs-keyword">from</span> torch.utils.data <span class="hljs-keyword">import</span> DataLoader
<span class="hljs-keyword">import</span> lightning.pytorch <span class="hljs-keyword">as</span> pl
</code></pre>
<p>So, all the required modules are imported. Everything looks almost the same; only the import of <code>lightning.pytorch</code> as <code>pl</code> is new. Next, let's copy a few more lines from the previous tutorial: nothing new, the same lines and the same functions.</p>
<pre><code class="lang-python">
<span class="hljs-comment"># Data preprocessing and augmentation</span>
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[<span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>], std=[<span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>])
])

<span class="hljs-comment"># Load CIFAR-10 dataset</span>
dataset = torchvision.datasets.CIFAR10(root=<span class="hljs-string">'./cifar'</span>, train=<span class="hljs-literal">True</span>, transform=transform, download=<span class="hljs-literal">True</span>)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=<span class="hljs-number">64</span>, shuffle=<span class="hljs-literal">True</span>)

criterion = nn.CrossEntropyLoss()
</code></pre>
<p>The model definition will also be the same.</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SimpleANN</span>(<span class="hljs-params">nn.Module</span>):</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self</span>):</span>
        super(SimpleANN, self).__init__()
        self.fc1 = nn.Linear(<span class="hljs-number">3</span> * <span class="hljs-number">32</span> * <span class="hljs-number">32</span>, <span class="hljs-number">128</span>)  <span class="hljs-comment"># Flatten the 32x32 RGB images</span>
        self.fc2 = nn.Linear(<span class="hljs-number">128</span>, <span class="hljs-number">64</span>)
        self.fc3 = nn.Linear(<span class="hljs-number">64</span>, <span class="hljs-number">10</span>)  <span class="hljs-comment"># Output layer for 10 classes</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">forward</span>(<span class="hljs-params">self, x</span>):</span>
        x = x.view(<span class="hljs-number">-1</span>, <span class="hljs-number">3</span> * <span class="hljs-number">32</span> * <span class="hljs-number">32</span>)  <span class="hljs-comment"># Flatten the input images</span>
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        <span class="hljs-keyword">return</span> x
</code></pre>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">LitSimpleANN</span>(<span class="hljs-params">pl.LightningModule</span>):</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, model</span>):</span>
    super().__init__()
    self.model = model

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">training_step</span>(<span class="hljs-params">self, batch, batch_idx</span>):</span>
    x, y = batch
    <span class="hljs-comment"># print(y)</span>
    out = self.model(x)
    <span class="hljs-comment"># print(out)</span>
    loss = criterion(out, y)
    preds = torch.argmax(out, dim=<span class="hljs-number">1</span>)
    accuracy = torch.sum(preds == y).item() / y.size(<span class="hljs-number">0</span>)

    self.log(<span class="hljs-string">'train_loss'</span>, loss,  on_epoch=<span class="hljs-literal">True</span>)  <span class="hljs-comment"># Log training loss</span>
    self.log(<span class="hljs-string">'train_accuracy'</span>, accuracy,  on_epoch=<span class="hljs-literal">True</span>)
    <span class="hljs-keyword">return</span> loss

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">configure_optimizers</span>(<span class="hljs-params">self</span>):</span>
    optimizer = torch.optim.SGD(self.model.parameters(), lr=<span class="hljs-number">0.001</span>, momentum=<span class="hljs-number">0.9</span>)
    <span class="hljs-keyword">return</span> optimizer
</code></pre>
<p>Now the real magic of PyTorch Lightning comes into action. We define a class <code>LitSimpleANN</code> as a subclass of <code>pl.LightningModule</code>, and it takes the <code>SimpleANN</code> model defined above as input. We then define the training step inside this <code>LitSimpleANN</code> class. The <code>training_step</code> method takes <code>batch</code> and <code>batch_idx</code> as input; each batch contains input images and their associated labels. Just as before, we feed the model the input images, it returns a score for each class, and we calculate the loss and accuracy from those scores. Here, <code>self.log</code> records <code>loss</code> and <code>accuracy</code> per step and per epoch, which is useful when evaluating model performance.</p>
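<p>The accuracy calculation inside <code>training_step</code> can be checked in isolation. Below is a minimal sketch using dummy logits (the tensor values are made up for illustration):</p>

```python
import torch

# Dummy logits for 4 samples and 10 classes, as the model produces for CIFAR-10
out = torch.tensor([
    [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # argmax -> class 0
    [0.1, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # argmax -> class 1
    [0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # argmax -> class 0
    [0.1, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # argmax -> class 1
])
y = torch.tensor([0, 1, 1, 1])  # ground-truth labels; the third sample is misclassified

preds = torch.argmax(out, dim=1)                     # predicted class per sample
accuracy = torch.sum(preds == y).item() / y.size(0)  # fraction of correct predictions
print(accuracy)  # 3 of 4 correct -> 0.75
```

The same two lines appear verbatim in <code>training_step</code> above; running them on a known batch is an easy sanity check before training.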
<pre><code class="lang-python">model = LitSimpleANN(SimpleANN())
</code></pre>
<h3 id="heading-training-the-data">Training the model</h3>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> lightning.pytorch.loggers <span class="hljs-keyword">import</span> CSVLogger

trainer = pl.Trainer(max_epochs=<span class="hljs-number">10</span>,  logger=CSVLogger(<span class="hljs-string">"logs"</span>))
trainer.fit(model = model, train_dataloaders=dataloader)
</code></pre>
<p>Here, I have imported <code>CSVLogger</code> to save the logs in <code>CSV</code> format for further reference. <code>pl.Trainer</code> takes away all the manual training loops of the custom PyTorch training loop we discussed in the last tutorial. So, when we hit Shift+Enter, <code>Bam</code>, the training starts. Nothing fancy here: <code>max_epochs=10</code> sets the number of epochs (the default is <code>1000</code> epochs). The output of this code is as follows:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693290932840/ed3aef5f-799e-44f3-a581-231669728fa8.png" alt /></p>
<p>Let's plot some graphs.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> seaborn <span class="hljs-keyword">as</span> sn
metrics = pd.read_csv(<span class="hljs-string">f"<span class="hljs-subst">{trainer.logger.log_dir}</span>/metrics.csv"</span>)
<span class="hljs-keyword">del</span> metrics[<span class="hljs-string">"step"</span>]
metrics.set_index(<span class="hljs-string">"epoch"</span>, inplace=<span class="hljs-literal">True</span>)
sn.relplot(data=metrics, kind=<span class="hljs-string">"line"</span>)
</code></pre>
<p>The <code>metrics.csv</code> file is auto-generated by the trainer's logger, and this code gives a nice line graph.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693290919549/1178dcc1-f3d9-4212-a6c1-6f563f1a4c80.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this post, we learned how to implement image classification with a simple ANN using the PyTorch Lightning library. Lightning is fast, easy to use, and saves us from writing unnecessary boilerplate loops. I highly recommend learning it; it will save you a lot of hassle compared to core PyTorch code.</p>
<p>That's all for today; see you in the next <a target="_blank" href="https://www.chalisebibek.com.np/a-slightly-advanced-ann-in-pytorch-for-image-classification?source=more_series_bottom_blogs">tutorial</a>.</p>
<p><a target="_blank" href="https://colab.research.google.com/drive/1ajNmMB1OyvMYlcWdKOuV0tk1YewhINtP?usp=sharing">Colab</a></p>
<p><a target="_blank" href="https://www.youtube.com/@bibekchalise">YouTube</a></p>
<p><code>Bibek Chalise is a Machine Learning enthusiast and Computer Vision Scientist associated with MarginTop Solutions.</code></p>
<p><code>MarginTop Solutions</code></p>
<p><code>Where Tech Meets Brilliance</code></p>
<p><code>Pokhara, Nepal</code></p>
<p><code>margintopsolutions@gmail.com</code></p>
<p><a target="_blank" href="https://margintopsolutions.github.io"><code>Website</code></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1693290740053/c2cefc57-623d-466a-8eff-c10de83301af.png" alt /></p>
]]></content:encoded></item><item><title><![CDATA[A slightly Advanced ANN in PyTorch for image Classification]]></title><description><![CDATA[Introduction
Hello everyone, welcome to my blog. In today's tutorial, we will design a more advanced ANN for image classification. This is the next part of our series on Computer Vision with PyTorch. So, without further ado, let's dive i...]]></description><link>https://www.chalisebibek.com.np/a-slightly-advanced-ann-in-pytorch-for-image-classification</link><guid isPermaLink="true">https://www.chalisebibek.com.np/a-slightly-advanced-ann-in-pytorch-for-image-classification</guid><category><![CDATA[Python]]></category><category><![CDATA[pytorch]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Tue, 15 Aug 2023 08:05:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1692086671816/0d6d814d-7905-445e-8047-ac947b390284.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Hello everyone, welcome to my blog. In today's tutorial, we will design a more advanced ANN for image classification. This is the next part of our series on Computer Vision with PyTorch. So, without further ado, let's dive into the code.</p>
<p>This part of the code is the same up to downloading the dataset and creating the data loader. If you are unclear about any step, I highly recommend checking the <a target="_blank" href="https://www.chalisebibek.com.np/pytorch-101-image-classification-with-simpleann">tutorial</a> where I have explained it all.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">import</span> torch.nn <span class="hljs-keyword">as</span> nn
<span class="hljs-keyword">import</span> torch.optim <span class="hljs-keyword">as</span> optim
<span class="hljs-keyword">import</span> torchvision
<span class="hljs-keyword">import</span> torchvision.transforms <span class="hljs-keyword">as</span> transforms
<span class="hljs-keyword">from</span> torch.utils.data <span class="hljs-keyword">import</span> DataLoader
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt

<span class="hljs-comment"># Data preprocessing and augmentation</span>
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[<span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>], std=[<span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>])
])

<span class="hljs-comment"># Load CIFAR-10 dataset</span>
train_dataset = torchvision.datasets.CIFAR10(root=<span class="hljs-string">'./data'</span>, train=<span class="hljs-literal">True</span>, download=<span class="hljs-literal">True</span>, transform=transform)
batch_size = <span class="hljs-number">64</span>  <span class="hljs-comment"># must be defined before the DataLoader uses it</span>
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=<span class="hljs-literal">True</span>)
</code></pre>
<p>Now, we need to define our model, which is structured the same way as in the previous tutorial but has more layers.</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AdvancedANN</span>(<span class="hljs-params">nn.Module</span>):</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self</span>):</span>
        super(AdvancedANN, self).__init__()
        self.fc1 = nn.Linear(<span class="hljs-number">32</span> * <span class="hljs-number">32</span> * <span class="hljs-number">3</span>, <span class="hljs-number">1024</span>)
        self.fc2 = nn.Linear(<span class="hljs-number">1024</span>, <span class="hljs-number">512</span>)
        self.fc3 = nn.Linear(<span class="hljs-number">512</span>, <span class="hljs-number">256</span>)
        self.fc4 = nn.Linear(<span class="hljs-number">256</span>, <span class="hljs-number">128</span>)
        self.fc5 = nn.Linear(<span class="hljs-number">128</span>, <span class="hljs-number">64</span>)
        self.fc6 = nn.Linear(<span class="hljs-number">64</span>, <span class="hljs-number">10</span>)
        self.relu = nn.ReLU()

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">forward</span>(<span class="hljs-params">self, x</span>):</span>
        x = x.view(x.size(<span class="hljs-number">0</span>), <span class="hljs-number">-1</span>)  <span class="hljs-comment"># Flatten the input images</span>
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.relu(self.fc3(x))
        x = self.relu(self.fc4(x))
        x = self.relu(self.fc5(x))
        x = self.fc6(x)
        <span class="hljs-keyword">return</span> x
</code></pre>
<p>Here, our <code>3*32*32</code> image is flattened into a one-dimensional vector of <code>3072</code> values and fed into the network. It is then projected to <code>1024</code> nodes, followed by <code>512</code>, <code>256</code>, <code>128</code> and <code>64</code> nodes, and finally to <code>10</code> nodes because we have 10 classes in our dataset. The forward function does the same: the <code>view</code> call reshapes each image into a single dimension, which is how we reshape tensors in PyTorch.</p>
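<p>To see how the shapes flow through these layers, we can run a quick shape check with a dummy batch (the batch size of 8 below is arbitrary, chosen just for illustration):</p>

```python
import torch
import torch.nn as nn

x = torch.randn(8, 3, 32, 32)       # a dummy batch of 8 CIFAR-10-sized images
flat = x.view(x.size(0), -1)        # flatten everything except the batch dimension
print(flat.shape)                   # torch.Size([8, 3072]) since 3 * 32 * 32 = 3072

fc1 = nn.Linear(32 * 32 * 3, 1024)  # first layer of the model above
h = fc1(flat)
print(h.shape)                      # torch.Size([8, 1024])
```

Each subsequent layer shrinks the second dimension the same way, down to <code>[8, 10]</code> at the output.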
<pre><code class="lang-python"><span class="hljs-comment"># Hyperparameters</span>
batch_size = <span class="hljs-number">64</span>
learning_rate = <span class="hljs-number">0.001</span>
num_epochs = <span class="hljs-number">10</span>

<span class="hljs-comment"># Initialize the model</span>
model = AdvancedANN()
<span class="hljs-comment"># Define loss function and optimizer</span>
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
</code></pre>
<p>This is also the same as before, except that we use <code>Adam</code> as the optimizer instead of <code>SGD</code>. Now, we need to define our training loop.</p>
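<p>Note that <code>nn.CrossEntropyLoss</code> expects raw logits (it applies the softmax internally), which is why the model's <code>forward</code> returns the last linear layer's output directly. A quick sketch with made-up logits:</p>

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.tensor([[2.0, 0.5, 0.1],   # sample 0: highest score for class 0
                       [0.2, 1.5, 0.3]])  # sample 1: highest score for class 1
labels = torch.tensor([0, 1])             # both predictions match the labels

loss = criterion(logits, labels)  # applies softmax + negative log-likelihood internally
print(loss.item())                # small positive value, since both samples are correct
```

Adding a softmax in <code>forward</code> would apply it twice and quietly hurt training, a common pitfall.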
<pre><code class="lang-python">losses = []
accuracies = []
<span class="hljs-comment"># Training loop</span>
<span class="hljs-keyword">for</span> epoch <span class="hljs-keyword">in</span> range(num_epochs):
    model.train()  <span class="hljs-comment"># Set the model to training mode</span>
    total_correct = <span class="hljs-number">0</span>
    total_samples = <span class="hljs-number">0</span>
    total_loss = <span class="hljs-number">0.0</span>
    <span class="hljs-keyword">for</span> i, (images, labels) <span class="hljs-keyword">in</span> enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        _, predicted = torch.max(outputs, <span class="hljs-number">1</span>)
        total_samples += labels.size(<span class="hljs-number">0</span>)
        total_correct += (predicted == labels).sum().item()
        total_loss += loss.item()

        <span class="hljs-keyword">if</span> (i + <span class="hljs-number">1</span>) % <span class="hljs-number">100</span> == <span class="hljs-number">0</span>:
            print(<span class="hljs-string">f"Epoch [<span class="hljs-subst">{epoch + <span class="hljs-number">1</span>}</span>/<span class="hljs-subst">{num_epochs}</span>], Step [<span class="hljs-subst">{i + <span class="hljs-number">1</span>}</span>/<span class="hljs-subst">{len(train_loader)}</span>], Loss: <span class="hljs-subst">{loss.item():<span class="hljs-number">.4</span>f}</span>"</span>)

    <span class="hljs-comment"># Calculate and print accuracy</span>
    accuracy = <span class="hljs-number">100</span> * total_correct / total_samples
    print(<span class="hljs-string">f"Epoch [<span class="hljs-subst">{epoch + <span class="hljs-number">1</span>}</span>/<span class="hljs-subst">{num_epochs}</span>], Training Accuracy: <span class="hljs-subst">{accuracy:<span class="hljs-number">.2</span>f}</span>%"</span>)

    <span class="hljs-comment"># Calculate and store average loss and accuracy for the epoch</span>
    average_loss = total_loss / len(train_loader)
    losses.append(average_loss)
    accuracies.append(accuracy)

print(<span class="hljs-string">"Training finished."</span>)
</code></pre>
<p>I have just added two new lists and appended the loss and accuracy of each epoch to them, so we can plot them at the end of the training loop. If we run the cell, the training loop starts.</p>
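<p>Once training finishes, those two lists can be plotted side by side; a minimal sketch with matplotlib, using dummy placeholder values rather than the real training results:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script also runs without a display
import matplotlib.pyplot as plt

# Dummy stand-ins for the `losses` and `accuracies` lists filled by the loop above
losses = [1.7, 1.5, 1.4, 1.3, 1.2]
accuracies = [37.7, 47.4, 52.0, 55.4, 58.6]
epochs = range(1, len(losses) + 1)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(epochs, losses, marker="o")
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Average loss")
ax2.plot(epochs, accuracies, marker="o")
ax2.set_xlabel("Epoch")
ax2.set_ylabel("Training accuracy (%)")
fig.tight_layout()
fig.savefig("training_curves.png")
```

With the real lists, loss should trend downward and accuracy upward across the ten epochs, as the log below shows.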
<pre><code class="lang-python">Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.7972</span>
Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.6378</span>
Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.8636</span>
Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.7561</span>
Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.5374</span>
Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.3903</span>
Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.9298</span>
Epoch [<span class="hljs-number">1</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">37.66</span>%
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4052</span>
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.5323</span>
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4231</span>
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4050</span>
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.6354</span>
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.5001</span>
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4958</span>
Epoch [<span class="hljs-number">2</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">47.43</span>%
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4195</span>
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4136</span>
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.3760</span>
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.3983</span>
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4408</span>
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.5327</span>
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4464</span>
Epoch [<span class="hljs-number">3</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">52.01</span>%
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9776</span>
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.3968</span>
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2217</span>
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2327</span>
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2769</span>
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2474</span>
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2011</span>
Epoch [<span class="hljs-number">4</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">55.39</span>%
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.4723</span>
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.3214</span>
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2499</span>
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.0077</span>
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2315</span>
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.3217</span>
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9848</span>
Epoch [<span class="hljs-number">5</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">58.57</span>%
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9484</span>
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.0160</span>
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9097</span>
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.1121</span>
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.0643</span>
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.1222</span>
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9151</span>
Epoch [<span class="hljs-number">6</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">61.42</span>%
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.0603</span>
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9580</span>
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.1307</span>
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9121</span>
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.8515</span>
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.8278</span>
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9549</span>
Epoch [<span class="hljs-number">7</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">64.02</span>%
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.7921</span>
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9545</span>
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.7991</span>
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.0606</span>
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.2592</span>
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9369</span>
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9270</span>
Epoch [<span class="hljs-number">8</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">67.04</span>%
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.7501</span>
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.1043</span>
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.8589</span>
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.1142</span>
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.7783</span>
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9715</span>
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.8937</span>
Epoch [<span class="hljs-number">9</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">69.46</span>%
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">100</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.5709</span>
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">200</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.5713</span>
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">300</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.9997</span>
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">400</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.6164</span>
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">500</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.0069</span>
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">600</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">1.1473</span>
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Step [<span class="hljs-number">700</span>/<span class="hljs-number">782</span>], Loss: <span class="hljs-number">0.6924</span>
Epoch [<span class="hljs-number">10</span>/<span class="hljs-number">10</span>], Training Accuracy: <span class="hljs-number">71.88</span>%
Training finished.
</code></pre>
<p>With the same dataset but a different, deeper architecture, we have been able to increase the accuracy and decrease the loss.</p>
<pre><code class="lang-Python"><span class="hljs-comment"># Save the trained model</span>
torch.save(model.state_dict(), <span class="hljs-string">"advanced_ann_cifar10.pth"</span>)
</code></pre>
<p>In this step, we save the trained model. <code>.pth</code> is the conventional extension for <code>PyTorch</code> model files; <code>.pt</code> can also be used instead. We will talk about loading a saved model in upcoming tutorials and using it to predict on data it was not trained on.</p>
<p>Now, let's plot a line plot for loss and accuracy.</p>
<pre><code class="lang-python"><span class="hljs-comment"># Plotting loss and accuracy</span>
plt.figure(figsize=(<span class="hljs-number">10</span>, <span class="hljs-number">5</span>))
plt.subplot(<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">1</span>)
plt.plot(losses, label=<span class="hljs-string">'Loss'</span>)
plt.xlabel(<span class="hljs-string">'Epoch'</span>)
plt.ylabel(<span class="hljs-string">'Loss'</span>)
plt.legend()

plt.subplot(<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">2</span>)
plt.plot(accuracies, label=<span class="hljs-string">'Accuracy'</span>, color=<span class="hljs-string">'orange'</span>)
plt.xlabel(<span class="hljs-string">'Epoch'</span>)
plt.ylabel(<span class="hljs-string">'Accuracy (%)'</span>)
plt.legend()

plt.tight_layout()
plt.show()
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1692085649749/cd858893-b493-4a30-b3d0-d3efddeaa1b2.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In this tutorial, I have explained how to build a more advanced artificial neural network from scratch with PyTorch. We learnt that increasing the number of layers can enhance the overall performance of the model. But let me remind you that we will not simply keep adding more and more layers to get better results; there are other ways to improve performance, and we will talk about them in the upcoming days.</p>
]]></content:encoded></item><item><title><![CDATA[PyTorch 101: Image classification with SimpleANN]]></title><description><![CDATA[Introduction
Hello everyone, it's me Bibek Chalise and welcome to my Blog Series. In this tutorial, we will start a new series of tutorials and that will be Computer Vision with PyTorch. PyTorch is one of the machine learning frameworks and is based ...]]></description><link>https://www.chalisebibek.com.np/pytorch-101-image-classification-with-simpleann</link><guid isPermaLink="true">https://www.chalisebibek.com.np/pytorch-101-image-classification-with-simpleann</guid><category><![CDATA[Python]]></category><category><![CDATA[pytorch]]></category><category><![CDATA[AI]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[neural networks]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Mon, 14 Aug 2023 07:07:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1691936005687/b627a885-c253-418a-a5c9-5ac166770bfd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>Hello everyone, it's me Bibek Chalise, and welcome to my blog series. In this tutorial, we start a new series: Computer Vision with PyTorch. <code>PyTorch</code> is a machine learning framework based on the Torch library that makes computer vision, natural language processing, and other machine learning and deep learning tasks easier. From today we will focus on computer vision with PyTorch; I am assuming you know a bit of computer vision background and PyTorch fundamentals such as tensors and basic neural networks. But if you don't know that yet, we will slowly get to know that stuff too. It's going to be a good series to work through. In this blog, we will design a very simple <code>Artificial Neural Network</code> to classify the 10 classes of the very popular <code>CIFAR-10</code> dataset. We will download it from the <code>torchvision</code> dataset hub, so there will be no fancy loading of custom datasets in this particular tutorial, but we will do that for sure in upcoming ones. Before starting the code, however, we need to understand a bit about how we are going to approach this task.</p>
<h2 id="heading-what-is-ann">What is <code>ANN</code>?</h2>
<p><img src="https://www.techslang.com/wp-content/uploads/2019/10/1111110-e1573505580370.jpg" alt="how neural networks work" />
Figure: Artificial Neural Network architectural diagram [source_image]. ANNs (Artificial Neural Networks, or simply neural networks) are computing systems, vaguely inspired by biological neural networks, that learn from the data we feed them. In this case, since we will be feeding them images, they will learn patterns in the images and the classes associated with those patterns. </p>
<p><img src="https://production-media.paperswithcode.com/datasets/4fdf2b82-2bc3-4f97-ba51-400322b228b1.png" alt="CIFAR-10 Dataset | Papers With Code" /></p>
<p>This tutorial is also available on <a target="_blank" href="https://youtu.be/xFqqWPwjpgg">YouTube</a> on my YouTube channel.</p>
<p>This is a sample of the dataset. With each image of an airplane we feed into the network, it builds up a pattern for airplanes, such as having two wings, a metal-like body, or other specifics; we let the network decide what pattern to learn. Whenever a new image is fed into the network, it analyzes the pattern in the new test data, aligns it with the patterns it previously learnt, and gives us a probability for each class.</p>
<h2 id="heading-what-next">What next?</h2>
<p>So, I think we have a basic overview of what an ANN is and how it will work. But how do we give our coloured (3-channel) images of size 32*32 (height and width) to the neural network in such a way that it generalizes a pattern? What we need is to flatten each image to a single dimension. The 3-dimensional <code>3 * 32 * 32</code> image will be reshaped into a <code>3072 * 1</code> dimensional vector. So we will have 3072 input nodes, which will be projected to a definite number of nodes in the hidden layer.</p>
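<p>The flattening step can be sketched in plain Python (no PyTorch needed) to see how a 3 * 32 * 32 image becomes a 3072-element vector; the nested-list <code>image</code> here is just an illustrative stand-in for a real tensor:</p>

```python
# Build a dummy 3-channel 32x32 "image" as nested lists (a stand-in for a tensor).
image = [[[0.0 for _ in range(32)] for _ in range(32)] for _ in range(3)]

# Flatten channels, rows, and columns into a single 1-D list,
# just as x.view(-1, 3 * 32 * 32) does for a tensor.
flat = [pixel for channel in image for row in channel for pixel in row]

print(len(flat))  # 3072
```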
<p>Without further ado, let's dive directly into the code. We will continue to learn more about the structure along with the code. Also, I am assuming you are well familiar with Google Colab, since we will be using it for this tutorial.</p>
<p>First of all, we need to import important libraries.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">import</span> torchvision
<span class="hljs-keyword">import</span> torchvision.transforms <span class="hljs-keyword">as</span> transforms
<span class="hljs-keyword">import</span> torch.nn <span class="hljs-keyword">as</span> nn
<span class="hljs-keyword">import</span> torch.optim <span class="hljs-keyword">as</span> optim
<span class="hljs-keyword">import</span> torch.nn.functional <span class="hljs-keyword">as</span> F
</code></pre>
<p><code>torch</code> is the main library of the PyTorch framework, and <code>torchvision</code> is its companion library for computer vision tasks. <code>transforms</code> will be used for data transformation and augmentation of the images, <code>torch.nn</code> is the core neural network module of Torch, <code>torch.optim</code> provides the optimisers, and <code>torch.nn.functional</code> has many functions that will be useful as we progress in the series.</p>
<pre><code class="lang-python"><span class="hljs-comment"># Define data transformations</span>
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((<span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>), (<span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>, <span class="hljs-number">0.5</span>))  <span class="hljs-comment"># Normalize images to [-1, 1]</span>
])
</code></pre>
<p>In this cell, we transform each image to a tensor, since PyTorch requires the values to be tensors (like NumPy's ndarray); <code>ToTensor</code> also scales the pixel values into the range <code>0 to 1</code>. Then we normalize them with a <code>mean of 0.5</code> and a <code>standard deviation of 0.5</code> for each channel. Remember, we are working with <code>RGB</code> images, right?</p>
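<p>Per value, the normalization is just (x - mean) / std; a minimal sketch (the <code>normalize</code> helper is illustrative, not a torchvision function) showing how it maps the <code>ToTensor</code> range [0, 1] to [-1, 1]:</p>

```python
def normalize(x, mean=0.5, std=0.5):
    """Mimic what transforms.Normalize does to a single channel value in [0, 1]."""
    return (x - mean) / std

print(normalize(0.0))  # -1.0 (black maps to the lower bound)
print(normalize(1.0))  # 1.0  (white maps to the upper bound)
print(normalize(0.5))  # 0.0  (mid-gray maps to zero)
```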
<pre><code class="lang-Python"><span class="hljs-comment"># Load CIFAR-10 dataset</span>
dataset = torchvision.datasets.CIFAR10(root=<span class="hljs-string">'./cifar'</span>, train=<span class="hljs-literal">True</span>, transform=transform, download=<span class="hljs-literal">True</span>)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=<span class="hljs-number">64</span>, shuffle=<span class="hljs-literal">True</span>)
</code></pre>
<p>It's funny, right: we have described how we will transform our images but haven't yet loaded them into our working directory. <code>torchvision.datasets</code> provides many ready-made datasets; we have chosen <code>CIFAR10</code> and downloaded it to the <code>./cifar</code> folder with the parameter <code>download=True</code>. <code>train=True</code> downloads only the training split from the hub, whereas <code>transform=transform</code> applies all the transformations described above to the downloaded images. The <code>DataLoader</code> class helps us feed inputs to the model in batches of 64 images at a time.</p>
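<p>A quick sketch of the batching arithmetic (the <code>num_batches</code> helper is illustrative, not a DataLoader method): CIFAR-10's 50,000 training images split into batches of 64 give 782 mini-batches per epoch, 781 full batches plus one final batch of 16, which matches the step counts you will see in the training logs.</p>

```python
import math

def num_batches(n_samples, batch_size):
    # DataLoader yields ceil(n / batch_size) batches when drop_last=False (the default).
    return math.ceil(n_samples / batch_size)

print(num_batches(50_000, 64))  # 782
print(50_000 - 781 * 64)        # 16 (size of the last, partial batch)
```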
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">SimpleANN</span>(<span class="hljs-params">nn.Module</span>):</span>
    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self</span>):</span>
        super(SimpleANN, self).__init__()
        self.fc1 = nn.Linear(<span class="hljs-number">3</span> * <span class="hljs-number">32</span> * <span class="hljs-number">32</span>, <span class="hljs-number">128</span>)  <span class="hljs-comment"># Flatten the 32x32 RGB images</span>
        self.fc2 = nn.Linear(<span class="hljs-number">128</span>, <span class="hljs-number">64</span>)
        self.fc3 = nn.Linear(<span class="hljs-number">64</span>, <span class="hljs-number">10</span>)  <span class="hljs-comment"># Output layer for 10 classes</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">forward</span>(<span class="hljs-params">self, x</span>):</span>
        x = x.view(<span class="hljs-number">-1</span>, <span class="hljs-number">3</span> * <span class="hljs-number">32</span> * <span class="hljs-number">32</span>)  <span class="hljs-comment"># Flatten the input images</span>
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        <span class="hljs-keyword">return</span> x
</code></pre>
<p>Now the big game time: we define our model. We make a Python class named SimpleANN as a subclass of <code>nn.Module</code>, the base class for all neural network modules in <code>PyTorch</code>. The <code>__init__(self)</code> method is called whenever the SimpleANN class is instantiated. As I mentioned, we convert our <code>3 * 32 * 32</code> image into a one-dimensional vector and feed it as input to the ANN. That input is connected to 128 nodes in the first hidden layer, which are connected to 64 nodes in the second hidden layer, and finally we have 10 nodes in the output layer, one for each of the 10 classes.</p>
<p>The forward function is the forward pass that carries the input image through all the layers. In it we use the <code>Rectified Linear Unit</code>, or <code>ReLU</code>, a popular <code>Activation Function</code> that takes the input and outputs the maximum of <code>0 and the input</code>. This introduces non-linearity into the network to improve its performance.</p>
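<p>ReLU itself is just max(0, x); a one-line plain-Python sketch of the element-wise rule <code>F.relu</code> applies:</p>

```python
def relu(x):
    """Rectified Linear Unit: pass positives through, clamp negatives to zero."""
    return max(0.0, x)

print(relu(-2.5))  # 0.0
print(relu(3.0))   # 3.0
```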
<pre><code class="lang-python">model = SimpleANN()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=<span class="hljs-number">0.001</span>, momentum=<span class="hljs-number">0.9</span>)
</code></pre>
<p>Now, we create <code>model</code> as an instance of the <code>SimpleANN</code> class. We also define <code>CrossEntropyLoss</code> as our loss function, used to calculate the loss during each training step, and <code>SGD</code>, or <code>Stochastic Gradient Descent</code>, as our optimizer to update the weights of the network. <code>lr=0.001</code> is the learning rate and defines how strongly the weights are updated, while the <code>momentum</code> parameter helps accelerate convergence.</p>
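<p>The per-weight update rule that SGD with momentum applies can be sketched in plain Python. This is a simplified model of what <code>optim.SGD</code> does internally (the real implementation handles extras like weight decay and dampening), using an exaggerated learning rate of 0.1 just to make the numbers readable:</p>

```python
def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One parameter update, following PyTorch's momentum formulation:
    v <- momentum * v + grad;  w <- w - lr * v."""
    velocity = momentum * velocity + grad
    w = w - lr * velocity
    return w, velocity

w, v = 1.0, 0.0
w, v = sgd_momentum_step(w, grad=2.0, velocity=v)
print(w, v)  # 0.8 2.0 — the velocity "remembers" the gradient for the next step
```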
<pre><code class="lang-python"><span class="hljs-comment"># Training loop with accuracy calculation</span>
epochs = <span class="hljs-number">10</span>
<span class="hljs-keyword">for</span> epoch <span class="hljs-keyword">in</span> range(epochs):
    running_loss = <span class="hljs-number">0.0</span>
    correct = <span class="hljs-number">0</span>
    total = <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> i, data <span class="hljs-keyword">in</span> enumerate(dataloader, <span class="hljs-number">0</span>):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

        <span class="hljs-comment"># Calculate accuracy</span>
        _, predicted = torch.max(outputs.data, <span class="hljs-number">1</span>)
        total += labels.size(<span class="hljs-number">0</span>)
        correct += (predicted == labels).sum().item()

        <span class="hljs-keyword">if</span> i % <span class="hljs-number">100</span> == <span class="hljs-number">99</span>:
            print(<span class="hljs-string">f'Epoch <span class="hljs-subst">{epoch + <span class="hljs-number">1</span>}</span>, Mini-batch <span class="hljs-subst">{i + <span class="hljs-number">1</span>}</span>, Loss: <span class="hljs-subst">{running_loss / <span class="hljs-number">100</span>:<span class="hljs-number">.3</span>f}</span>, Accuracy: <span class="hljs-subst">{(<span class="hljs-number">100</span> * correct / total):<span class="hljs-number">.2</span>f}</span>%'</span>)
            running_loss = <span class="hljs-number">0.0</span>

print(<span class="hljs-string">'Finished Training'</span>)
</code></pre>
<p>So, now we are in the training loop. We have chosen <code>epochs=10</code>, and in each epoch we take 64 images and their corresponding labels from the data loader at a time. We feed each batch to the model, which gives us an output, and we are only interested in the class with the maximum score. If the prediction matches the label, we increment <code>correct</code> by 1 and use it to calculate the accuracy. There are a few things in this loop to understand.</p>
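<p>The accuracy bookkeeping can be mirrored in plain Python (the <code>batch_accuracy_counts</code> helper is illustrative, not part of the loop above): <code>torch.max(outputs.data, 1)</code> returns the index of the largest score per row, which plays the role of the predicted class here.</p>

```python
def batch_accuracy_counts(scores, labels):
    """Return (correct, total) for one batch, mimicking the loop's bookkeeping."""
    predicted = [row.index(max(row)) for row in scores]  # argmax per sample
    correct = sum(int(p == y) for p, y in zip(predicted, labels))
    return correct, len(labels)

scores = [[0.1, 2.0, 0.3], [1.5, 0.2, 0.1]]  # two samples, three classes
labels = [1, 2]
print(batch_accuracy_counts(scores, labels))  # (1, 2) -> 50% on this toy batch
```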
<pre><code class="lang-python">    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
</code></pre>
<p>So, here the optimizer starts each batch with zero gradients; only after the loss is calculated from the difference between the predicted outputs and the labels does backward propagation run, and the optimizer updates the weights according to the loss. We zero out all of our gradients so that they don't accumulate over time, and then calculate new ones using the backpropagation algorithm.</p>
<pre><code class="lang-python">Epoch <span class="hljs-number">1</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.306</span>, Accuracy: <span class="hljs-number">54.05</span>%
Epoch <span class="hljs-number">1</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.324</span>, Accuracy: <span class="hljs-number">53.85</span>%
Epoch <span class="hljs-number">1</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.337</span>, Accuracy: <span class="hljs-number">53.52</span>%
Epoch <span class="hljs-number">1</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.318</span>, Accuracy: <span class="hljs-number">53.50</span>%
Epoch <span class="hljs-number">1</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.312</span>, Accuracy: <span class="hljs-number">53.63</span>%
Epoch <span class="hljs-number">1</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.308</span>, Accuracy: <span class="hljs-number">53.85</span>%
Epoch <span class="hljs-number">1</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.311</span>, Accuracy: <span class="hljs-number">53.88</span>%
Epoch <span class="hljs-number">2</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.315</span>, Accuracy: <span class="hljs-number">54.02</span>%
Epoch <span class="hljs-number">2</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.276</span>, Accuracy: <span class="hljs-number">54.66</span>%
Epoch <span class="hljs-number">2</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.287</span>, Accuracy: <span class="hljs-number">54.74</span>%
Epoch <span class="hljs-number">2</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.276</span>, Accuracy: <span class="hljs-number">54.80</span>%
Epoch <span class="hljs-number">2</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.311</span>, Accuracy: <span class="hljs-number">54.75</span>%
Epoch <span class="hljs-number">2</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.268</span>, Accuracy: <span class="hljs-number">54.96</span>%
Epoch <span class="hljs-number">2</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.275</span>, Accuracy: <span class="hljs-number">55.04</span>%
Epoch <span class="hljs-number">3</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.279</span>, Accuracy: <span class="hljs-number">55.28</span>%
Epoch <span class="hljs-number">3</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.265</span>, Accuracy: <span class="hljs-number">55.54</span>%
Epoch <span class="hljs-number">3</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.270</span>, Accuracy: <span class="hljs-number">55.62</span>%
Epoch <span class="hljs-number">3</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.259</span>, Accuracy: <span class="hljs-number">55.79</span>%
Epoch <span class="hljs-number">3</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.263</span>, Accuracy: <span class="hljs-number">55.89</span>%
Epoch <span class="hljs-number">3</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.248</span>, Accuracy: <span class="hljs-number">56.00</span>%
Epoch <span class="hljs-number">3</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.259</span>, Accuracy: <span class="hljs-number">55.91</span>%
Epoch <span class="hljs-number">4</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.223</span>, Accuracy: <span class="hljs-number">56.84</span>%
Epoch <span class="hljs-number">4</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.223</span>, Accuracy: <span class="hljs-number">57.07</span>%
Epoch <span class="hljs-number">4</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.229</span>, Accuracy: <span class="hljs-number">57.10</span>%
Epoch <span class="hljs-number">4</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.253</span>, Accuracy: <span class="hljs-number">56.86</span>%
Epoch <span class="hljs-number">4</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.218</span>, Accuracy: <span class="hljs-number">57.06</span>%
Epoch <span class="hljs-number">4</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.235</span>, Accuracy: <span class="hljs-number">56.99</span>%
Epoch <span class="hljs-number">4</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.252</span>, Accuracy: <span class="hljs-number">56.89</span>%
Epoch <span class="hljs-number">5</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.177</span>, Accuracy: <span class="hljs-number">58.47</span>%
Epoch <span class="hljs-number">5</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.215</span>, Accuracy: <span class="hljs-number">57.97</span>%
Epoch <span class="hljs-number">5</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.214</span>, Accuracy: <span class="hljs-number">58.03</span>%
Epoch <span class="hljs-number">5</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.162</span>, Accuracy: <span class="hljs-number">58.38</span>%
Epoch <span class="hljs-number">5</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.206</span>, Accuracy: <span class="hljs-number">58.28</span>%
Epoch <span class="hljs-number">5</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.215</span>, Accuracy: <span class="hljs-number">58.09</span>%
Epoch <span class="hljs-number">5</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.239</span>, Accuracy: <span class="hljs-number">57.92</span>%
Epoch <span class="hljs-number">6</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.188</span>, Accuracy: <span class="hljs-number">58.33</span>%
Epoch <span class="hljs-number">6</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.169</span>, Accuracy: <span class="hljs-number">58.79</span>%
Epoch <span class="hljs-number">6</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.165</span>, Accuracy: <span class="hljs-number">58.77</span>%
Epoch <span class="hljs-number">6</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.195</span>, Accuracy: <span class="hljs-number">58.73</span>%
Epoch <span class="hljs-number">6</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.184</span>, Accuracy: <span class="hljs-number">58.74</span>%
Epoch <span class="hljs-number">6</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.195</span>, Accuracy: <span class="hljs-number">58.55</span>%
Epoch <span class="hljs-number">6</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.199</span>, Accuracy: <span class="hljs-number">58.46</span>%
Epoch <span class="hljs-number">7</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.172</span>, Accuracy: <span class="hljs-number">59.00</span>%
Epoch <span class="hljs-number">7</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.157</span>, Accuracy: <span class="hljs-number">58.99</span>%
Epoch <span class="hljs-number">7</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.159</span>, Accuracy: <span class="hljs-number">59.08</span>%
Epoch <span class="hljs-number">7</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.158</span>, Accuracy: <span class="hljs-number">59.22</span>%
Epoch <span class="hljs-number">7</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.170</span>, Accuracy: <span class="hljs-number">59.25</span>%
Epoch <span class="hljs-number">7</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.174</span>, Accuracy: <span class="hljs-number">59.15</span>%
Epoch <span class="hljs-number">7</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.159</span>, Accuracy: <span class="hljs-number">59.12</span>%
Epoch <span class="hljs-number">8</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.128</span>, Accuracy: <span class="hljs-number">60.77</span>%
Epoch <span class="hljs-number">8</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.124</span>, Accuracy: <span class="hljs-number">61.12</span>%
Epoch <span class="hljs-number">8</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.133</span>, Accuracy: <span class="hljs-number">60.99</span>%
Epoch <span class="hljs-number">8</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.152</span>, Accuracy: <span class="hljs-number">60.70</span>%
Epoch <span class="hljs-number">8</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.140</span>, Accuracy: <span class="hljs-number">60.70</span>%
Epoch <span class="hljs-number">8</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.156</span>, Accuracy: <span class="hljs-number">60.41</span>%
Epoch <span class="hljs-number">8</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.172</span>, Accuracy: <span class="hljs-number">60.16</span>%
Epoch <span class="hljs-number">9</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.119</span>, Accuracy: <span class="hljs-number">61.19</span>%
Epoch <span class="hljs-number">9</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.120</span>, Accuracy: <span class="hljs-number">60.77</span>%
Epoch <span class="hljs-number">9</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.132</span>, Accuracy: <span class="hljs-number">60.68</span>%
Epoch <span class="hljs-number">9</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.106</span>, Accuracy: <span class="hljs-number">60.84</span>%
Epoch <span class="hljs-number">9</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.125</span>, Accuracy: <span class="hljs-number">60.79</span>%
Epoch <span class="hljs-number">9</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.139</span>, Accuracy: <span class="hljs-number">60.73</span>%
Epoch <span class="hljs-number">9</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.118</span>, Accuracy: <span class="hljs-number">60.79</span>%
Epoch <span class="hljs-number">10</span>, Mini-batch <span class="hljs-number">100</span>, Loss: <span class="hljs-number">1.088</span>, Accuracy: <span class="hljs-number">61.70</span>%
Epoch <span class="hljs-number">10</span>, Mini-batch <span class="hljs-number">200</span>, Loss: <span class="hljs-number">1.074</span>, Accuracy: <span class="hljs-number">61.97</span>%
Epoch <span class="hljs-number">10</span>, Mini-batch <span class="hljs-number">300</span>, Loss: <span class="hljs-number">1.103</span>, Accuracy: <span class="hljs-number">61.71</span>%
Epoch <span class="hljs-number">10</span>, Mini-batch <span class="hljs-number">400</span>, Loss: <span class="hljs-number">1.090</span>, Accuracy: <span class="hljs-number">61.99</span>%
Epoch <span class="hljs-number">10</span>, Mini-batch <span class="hljs-number">500</span>, Loss: <span class="hljs-number">1.121</span>, Accuracy: <span class="hljs-number">61.59</span>%
Epoch <span class="hljs-number">10</span>, Mini-batch <span class="hljs-number">600</span>, Loss: <span class="hljs-number">1.113</span>, Accuracy: <span class="hljs-number">61.45</span>%
Epoch <span class="hljs-number">10</span>, Mini-batch <span class="hljs-number">700</span>, Loss: <span class="hljs-number">1.100</span>, Accuracy: <span class="hljs-number">61.44</span>%
Finished Training
</code></pre>
<p>This is the result of the training loop: at the end of 10 epochs, we achieved a loss of <code>1.1</code> and an accuracy of <code>61.44%</code>. This was the simplest neural network we could design, so there is room to tweak the <code>optimizer type</code>, <code>number of epochs</code>, <code>batch_size</code>, and <code>number of layers</code> in the architecture of the ANN. In the next tutorial, we will design a more complex ANN for a new dataset and see how a change in architecture impacts the overall performance of the model. Until then, I sign off from this tutorial and see you again.</p>
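<p>As a quick sketch of the kind of tweaks mentioned above (layer sizes and hyperparameter values here are illustrative, not the exact ones from this tutorial), the optimizer type, learning rate, and batch size are each one-line changes in PyTorch:</p>

```python
import torch
import torch.nn as nn

# A tiny ANN of the kind used in this tutorial (sizes are illustrative)
model = nn.Sequential(
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Tweak the optimizer type: swap SGD for Adam (or vice versa)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

# Batch size is just a DataLoader argument:
# loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)
```

Each of these changes affects convergence speed and final accuracy, which is exactly what we will experiment with in the next tutorial.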
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://pytorch.org/tutorials/">PyTorch Tutorials</a></p>
</li>
<li><p><a target="_blank" href="https://www.techslang.com/how-does-an-artificial-neural-network-work/">source_image</a></p>
</li>
<li><p><a target="_blank" href="https://colab.research.google.com/drive/14w5Y00EYWSZLQ3aC5QM0MpsN70vAsDf7?usp=sharing">ColabNotebook</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Evaluation of GANs model -Part 2]]></title><description><![CDATA[Let's learn more about the feature extraction that we discussed in part - I.
In the last article, we discussed why evaluating GANs model is hard and what are the ways to evaluate it. We concluded that it's better to use the feature distance method ra...]]></description><link>https://www.chalisebibek.com.np/evaluation-of-gans-model-part-2</link><guid isPermaLink="true">https://www.chalisebibek.com.np/evaluation-of-gans-model-part-2</guid><category><![CDATA[GANs]]></category><category><![CDATA[pytorch]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Thu, 05 Jan 2023 13:14:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672924341064/139a4731-5f0b-45cf-8bba-c389e7fe28d6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let's learn more about the feature extraction that we discussed in <a target="_blank" href="https://www.chalisebibek.com.np/evaluation-of-gans-model">part - I.</a></p>
<p>In the last <a target="_blank" href="https://www.chalisebibek.com.np/evaluation-of-gans-model">article</a>, we discussed why evaluating GAN models is hard and what the options for evaluating them are. We concluded that it's better to use the feature distance method rather than the pixel distance method.</p>
<h1 id="heading-what-is-feature-distance">What is feature distance?</h1>
<p>It is a method of using features extracted from the classifier to differentiate between real and fake images.</p>
<blockquote>
<p>Classifier -&gt; Features -&gt; Differentiate between real and fake.</p>
</blockquote>
<p>We extract the features of images using a pre-trained classifier. In such a classifier, the fully connected layers perform the classification while the convolutional part performs feature extraction. So why not keep only the convolutional part and eliminate the fully connected layers? The pooling layers then give us the features.</p>
<p>Which pooling layer to use is an empirical question, because you never know in advance which one yields the most useful features.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672893791287/835404e8-2f29-4901-939f-9ecfb222c53e.png" alt class="image--center mx-auto" /></p>
<p>These features are called embeddings, as they condense the image into a small vector of numbers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672894232961/eb0bb34f-85ad-4839-ba9a-258bf663c179.png" alt class="image--center mx-auto" /></p>
<p>If the real and fake image embeddings are close together, we can say the generator is doing well.</p>
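<p>A minimal sketch of the idea (the tiny CNN below is a stand-in for a real pre-trained classifier such as Inception-v3; all names are illustrative): cut the network after the convolutional/pooling part and use the pooled activations as the embedding, discarding the fully connected head.</p>

```python
import torch
import torch.nn as nn

# Stand-in classifier: convolutional feature extractor + fully connected head
features = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # pooling layer whose output we treat as the embedding
)
head = nn.Sequential(nn.Flatten(), nn.Linear(8, 10))  # classification layers we discard

img = torch.randn(4, 3, 32, 32)       # a batch of 4 RGB images
embedding = features(img).flatten(1)  # shape (4, 8): one condensed embedding per image
```

For real evaluation you would load a pre-trained network and take the activations of one of its pooling layers instead of this toy extractor.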
<h1 id="heading-how-to-compare-embeddings">How to compare embeddings?</h1>
<h4 id="heading-frechet-inception-distance-fidhttpsenwikipediaorgwikifrc3a9chetinceptiondistance"><a target="_blank" href="https://en.wikipedia.org/wiki/Fr%C3%A9chet_inception_distance">Frechet Inception Distance (FID)</a></h4>
<p>It can be used to calculate the distance between curves and distributions.</p>
<p>$$d(X,Y) = (\mu_X - \mu_Y)^2 + (\sigma_X - \sigma_Y)^2$$</p>
<p>The mean gives you a sense of the centre of a distribution, and the standard deviation tells you how spread out it is.</p>
<h2 id="heading-multivariate-normal-distributions">Multivariate Normal Distributions</h2>
<p>It is the normal distribution generalised to multiple dimensions.</p>
<p>For multiple dimensions, we use the covariance matrix, which measures how the dimensions of a distribution vary together.</p>
<p>So, for multivariate normal distributions, we use the following formula:</p>
<p>$$FID = \|\mu_X - \mu_Y\|^2 + \mathrm{Tr}\left(\Sigma_X + \Sigma_Y - 2\sqrt{\Sigma_X \Sigma_Y}\right)$$</p>
<p>Where:</p>
<p>Tr = trace of a matrix (the sum of its diagonal elements)</p>
<p>Real and fake embeddings are treated as two multivariate normal distributions.</p>
<p>Then we can compare real and fake with the FID formula.</p>
<p>X = Real</p>
<p>Y = Fake</p>
<p>$$\mu_X = \text{mean of the real embeddings}$$</p>
<p>$$\mu_Y = \text{mean of the fake embeddings}$$</p>
<p>$$\Sigma_X = \text{covariance of the real embeddings}$$</p>
<p>$$\Sigma_Y = \text{covariance of the fake embeddings}$$</p>
<p>The lower the FID, the closer the distributions, the better.</p>
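<p>The FID formula above can be computed directly from two sets of embeddings. A minimal sketch with NumPy/SciPy (function and variable names are my own, not from a standard library):</p>

```python
import numpy as np
from scipy import linalg

def fid(real_emb, fake_emb):
    """Frechet distance between two sets of embeddings, shape (n_samples, dim)."""
    mu_x, mu_y = real_emb.mean(axis=0), fake_emb.mean(axis=0)
    sigma_x = np.cov(real_emb, rowvar=False)
    sigma_y = np.cov(fake_emb, rowvar=False)
    # Matrix square root of the product of the two covariance matrices
    covmean = linalg.sqrtm(sigma_x @ sigma_y).real  # drop tiny imaginary parts
    return float(np.sum((mu_x - mu_y) ** 2)
                 + np.trace(sigma_x + sigma_y - 2 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(0, 1, size=(2000, 4))
b = rng.normal(0, 1, size=(2000, 4))  # same distribution as a -> FID near 0
c = rng.normal(3, 1, size=(2000, 4))  # shifted distribution -> much larger FID
```

Identical distributions give an FID near zero, and the score grows as the two distributions drift apart, which is exactly the "lower is better" behaviour described above.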
<h3 id="heading-shortcomings-of-fid">Shortcomings of FID</h3>
<ul>
<li><p>Uses a pre-trained Inception-v3 model, which may not capture all features</p>
</li>
<li><p>Needs a large sample size</p>
</li>
<li><p>Limited statistics used (only the mean and covariance)</p>
</li>
<li><p>Slow to run.</p>
</li>
</ul>
<blockquote>
<p><em>There is another method, known as the Inception Score, which can also be used to compare features. (Readers are advised to study it on their own; the math is too involved to include in this blog.)</em></p>
</blockquote>
<h2 id="heading-sampling-and-truncation">Sampling and Truncation</h2>
<p>Sampling is an important technique. When you sample noise from a normal distribution with mean zero and standard deviation one, you get higher-quality results, but the images look typical, with less variation. When you sample from a wider noise distribution, you get more diverse images but of lower quality. So there is always a trade-off.</p>
<p>We can conclude that the evaluation of a GAN depends very much on how you sample, not only on the model parameters, so choose wisely how you sample.</p>
<p>So, to reduce the trade-off between diversity and fidelity, we can use the truncation technique.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672924060366/21b20575-1f13-43f1-bf57-70d87d43f06e.png" alt class="image--center mx-auto" /></p>
<p>Fig: Truncation of the sample.</p>
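<p>A minimal sketch of the truncation trick (the threshold value is illustrative): noise entries that fall outside a threshold are resampled until every entry lies inside it, trading diversity for fidelity.</p>

```python
import numpy as np

def truncated_noise(n, z_dim, threshold=1.0, seed=0):
    """Sample z ~ N(0, 1), resampling any entries with |z| > threshold."""
    rng = np.random.default_rng(seed)
    z = rng.normal(0, 1, size=(n, z_dim))
    mask = np.abs(z) > threshold
    while mask.any():                          # keep resampling the outliers
        z[mask] = rng.normal(0, 1, size=mask.sum())
        mask = np.abs(z) > threshold
    return z

# Smaller threshold -> less diverse but higher-fidelity samples
z = truncated_noise(16, 100, threshold=0.7)
```

Lowering the threshold pushes all samples toward the dense centre of the noise distribution, which is the behaviour shown in the figure above.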
<h1 id="heading-thank-you">Thank you!</h1>
]]></content:encoded></item><item><title><![CDATA[Evaluation of GANs Model]]></title><description><![CDATA[For classification in supervised learning, there are labels which make it easy to classify. But since GANs are unsupervised, it is not easy to decode the image generated from a random noise vector as real or fake. There is no clear 1 or 0.
A discrimi...]]></description><link>https://www.chalisebibek.com.np/evaluation-of-gans-model</link><guid isPermaLink="true">https://www.chalisebibek.com.np/evaluation-of-gans-model</guid><category><![CDATA[GANs]]></category><category><![CDATA[Python]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[reading]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Tue, 03 Jan 2023 09:17:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672735993439/80cfae98-bda0-4aed-a437-a3691071df0e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For classification in supervised learning, there are labels which make it easy to classify. But since GANs are unsupervised, it is not easy to decode the image generated from a random noise vector as real or fake. There is no clear 1 or 0.</p>
<p>A discriminator cannot be used for evaluation, because it overfits to the generator it is trained with.</p>
<p>There are two important properties when it comes to generating images with GANs:</p>
<ol>
<li><p>Fidelity</p>
</li>
<li><p>Diversity</p>
</li>
</ol>
<p>Fidelity deals with:</p>
<p>-&gt; How good are the fake images?</p>
<p>-&gt; How far is the fake image from the real one?</p>
<p>Diversity deals with:</p>
<p>-&gt; Can the GAN generate a variety of different images?</p>
<p>Comparing images</p>
<p>There are two ways to compare the real and fake images produced by the generator.</p>
<ol>
<li><p>Pixel Distance</p>
</li>
<li><p>Feature Distance</p>
</li>
</ol>
<p>Pixel Distance</p>
<p><img src="https://static.javatpoint.com/programs/images/program-to-calculate-the-subtraction-of-2-matrices.png" alt="Real - Fake = Abs. Distance" class="image--center mx-auto" /></p>
<p><code>Fig: Real Image - Fake Image = Abs. distance</code></p>
<p>This is not a great measure: if the image is shifted by even a single pixel, the absolute difference changes significantly.</p>
<p>Feature Distance</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672737181121/fa54a4a2-8db3-4803-b2ce-504b07b3474a.png" alt="Feature Distance" class="image--center mx-auto" /></p>
<p><code>Fig: Feature Distance</code></p>
<p>With this technique, even a shift in pixel values won't make any difference.</p>
<p>Conclusion:</p>
<p>-&gt; Pixel distance is simple but unreliable.</p>
<p>-&gt; Feature distance uses the higher-level features of an image, making it more reliable.</p>
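<p>The weakness of pixel distance is easy to demonstrate: shifting an image by a single pixel leaves it visually almost identical but produces a large absolute difference. A tiny NumPy sketch (the array values are made up):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(28, 28)).astype(float)

# Two "fake" images: an exact copy, and the same copy shifted right by one pixel
copy = real.copy()
shifted = np.roll(real, shift=1, axis=1)

dist_copy = np.abs(real - copy).mean()      # 0.0 -- identical pixels
dist_shift = np.abs(real - shifted).mean()  # large, despite looking the same
```

A feature-based distance computed on embeddings of the two images would barely change under such a shift, which is why it is the more reliable choice.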
]]></content:encoded></item><item><title><![CDATA[Devanagari handwriting and letters Generation with Deep Convolution Generative Adversarial Network (DCGAN)]]></title><description><![CDATA[Hello Everyone,
The following two images are generated by Deep Convolution Generative Adversarial Network trained over Devanagari Numbers and Letters.


Fig: generated images
The dataset was taken from GitHub (https://github.com/kcnishan/Nepali_handw...]]></description><link>https://www.chalisebibek.com.np/devanagari-handwriting-and-letters-generation-with-deep-convolution-generative-adversarial-network-dcgan</link><guid isPermaLink="true">https://www.chalisebibek.com.np/devanagari-handwriting-and-letters-generation-with-deep-convolution-generative-adversarial-network-dcgan</guid><category><![CDATA[pytorch]]></category><category><![CDATA[ML]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[GANs]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Tue, 20 Dec 2022 03:47:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671506141303/EFIpJccjl.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Everyone,</p>
<p>The following two images are generated by Deep Convolution Generative Adversarial Network trained over Devanagari Numbers and Letters.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671506573589/hYvHRLfYp.png" alt="Looks Like it 2 right!!" class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671507602185/h0H-mo6MJ.png" alt class="image--center mx-auto" /></p>
<p>Fig: generated images</p>
<p>The dataset was taken from GitHub (<a target="_blank" href="https://github.com/kcnishan/Nepali_handwritten_digits_recognition">https://github.com/kcnishan/Nepali_handwritten_digits_recognition</a>) and the framework used to design this GAN was PyTorch.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671506529994/C_4elEtWV.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671506536809/bUP8QsI0-.png" alt class="image--center mx-auto" /></p>
<p>Fig: Original Dataset</p>
<p>Then, each image path and its label are collected into a dataframe using the following script.</p>
<pre><code class="lang-python">img_path = []
label = []
<span class="hljs-keyword">for</span> each <span class="hljs-keyword">in</span> os.listdir(train_path):
  <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> os.listdir(train_path+<span class="hljs-string">'/'</span>+each):
    img_path.append(train_path+<span class="hljs-string">'/'</span>+each+<span class="hljs-string">'/'</span>+x)
    label.append(each)

img_path_test = []
label_test = []
<span class="hljs-keyword">for</span> each <span class="hljs-keyword">in</span> os.listdir(Test_path):
  <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> os.listdir(Test_path+<span class="hljs-string">'/'</span>+each):
    img_path_test.append(Test_path+<span class="hljs-string">'/'</span>+each+<span class="hljs-string">'/'</span>+x)
    label_test.append(each)

df = pd.DataFrame(list(zip(img_path, label)), columns=[<span class="hljs-string">'path'</span>, <span class="hljs-string">'label'</span>])
df_test = pd.DataFrame(list(zip(img_path_test, label_test)), columns=[<span class="hljs-string">'path'</span>, <span class="hljs-string">'label'</span>])
LbE = LabelEncoder()
df.label =  LbE.fit_transform(df.label)
df_test.label = LbE.transform(df_test.label) <span class="hljs-comment"># reuse the encoder fitted on the training labels</span>
</code></pre>
<p>Then a custom dataset class was designed that returns the image tensor and the target label. The label is not used for the GAN, but is later useful for prediction.</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">LetterDataset</span>(<span class="hljs-params">Dataset</span>):</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, df</span>):</span>
    self.fpath = df.path
    self.label = df.label
    self.transform = T.Compose(
   [T.ToPILImage(),
    T.ToTensor(),  
    T.Normalize(
       mean=[<span class="hljs-number">0.485</span>, <span class="hljs-number">0.456</span>, <span class="hljs-number">0.406</span>],
       std=[<span class="hljs-number">0.229</span>, <span class="hljs-number">0.224</span>, <span class="hljs-number">0.225</span>]
   ),
    T.Grayscale(num_output_channels=<span class="hljs-number">1</span>)
    ]
)

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__len__</span>(<span class="hljs-params">self</span>):</span>
    <span class="hljs-keyword">return</span> len(self.fpath)

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__getitem__</span>(<span class="hljs-params">self, ix</span>):</span>
    img = self.fpath[ix]
    target = self.label[ix]
    img = cv2.imread(img)
    im = self.transform(img)
    <span class="hljs-keyword">return</span> (im/<span class="hljs-number">255</span>).to(device).float(), torch.tensor(target).long().to(device)
</code></pre>
<p>BCELoss is used as the loss function and Adam as the optimiser. Following the DCGAN paper's recommendation, the optimiser parameters beta1 and beta2 are set to 0.5 and 0.999, with a learning rate of 0.0002. The models are trained for 50 epochs.</p>
<pre><code class="lang-python">image_size = <span class="hljs-number">32</span>
batch_size = <span class="hljs-number">32</span>
z_dim =<span class="hljs-number">100</span>
beta1 = <span class="hljs-number">0.5</span>
lr=<span class="hljs-number">0.0002</span>
epochs = <span class="hljs-number">50</span>
sample_size=<span class="hljs-number">8</span>
</code></pre>
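<p>With these hyperparameters, the two Adam optimisers would be created roughly like this (the linear layers below are stand-ins for the Generator and Discriminator defined later; this is a sketch, not the exact training code):</p>

```python
import torch
import torch.nn as nn

beta1, lr = 0.5, 0.0002

# Stand-in modules; in this article these are the DCGAN Generator/Discriminator
gen = nn.Linear(100, 32 * 32)
disc = nn.Linear(32 * 32, 1)

# DCGAN-style Adam optimisers with betas=(0.5, 0.999)
optimizer_g = torch.optim.Adam(gen.parameters(), lr=lr, betas=(beta1, 0.999))
optimizer_d = torch.optim.Adam(disc.parameters(), lr=lr, betas=(beta1, 0.999))
```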
<p>The images are 32 × 32, and no changes are made to the image height and width, but all images are normalised with the following mean and standard deviation:</p>
<pre><code class="lang-python">(
       mean=[<span class="hljs-number">0.485</span>, <span class="hljs-number">0.456</span>, <span class="hljs-number">0.406</span>],
       std=[<span class="hljs-number">0.229</span>, <span class="hljs-number">0.224</span>, <span class="hljs-number">0.225</span>]
   )
</code></pre>
<p>The basic GAN architecture is:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">weights_init</span>(<span class="hljs-params">m</span>):</span>
               classname = m.__class__.__name__
               <span class="hljs-keyword">if</span> classname.find(<span class="hljs-string">'Conv'</span>) != <span class="hljs-number">-1</span>:
                   nn.init.normal_(m.weight.data, <span class="hljs-number">0.0</span>, <span class="hljs-number">0.02</span>)
               <span class="hljs-keyword">elif</span> classname.find(<span class="hljs-string">'BatchNorm'</span>) != <span class="hljs-number">-1</span>:
                   nn.init.normal_(m.weight.data, <span class="hljs-number">1.0</span>, <span class="hljs-number">0.02</span>)
                   nn.init.constant_(m.bias.data, <span class="hljs-number">0</span>)

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Generator</span>(<span class="hljs-params">nn.Module</span>):</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self</span>):</span>
    super(Generator,self).__init__()
    self.model = nn.Sequential(
        nn.ConvTranspose2d(<span class="hljs-number">100</span>,<span class="hljs-number">64</span>*<span class="hljs-number">8</span>,<span class="hljs-number">4</span>,<span class="hljs-number">1</span>,<span class="hljs-number">0</span>,bias=<span class="hljs-literal">False</span>,),
        nn.BatchNorm2d(<span class="hljs-number">64</span>*<span class="hljs-number">8</span>),
        nn.ReLU(<span class="hljs-literal">True</span>),
        nn.ConvTranspose2d(<span class="hljs-number">64</span>*<span class="hljs-number">8</span>,<span class="hljs-number">64</span>*<span class="hljs-number">4</span>,<span class="hljs-number">4</span>,<span class="hljs-number">2</span>,<span class="hljs-number">1</span>,bias=<span class="hljs-literal">False</span>),
        nn.BatchNorm2d(<span class="hljs-number">64</span>*<span class="hljs-number">4</span>),
        nn.ReLU(<span class="hljs-literal">True</span>),
        nn.ConvTranspose2d( <span class="hljs-number">64</span>*<span class="hljs-number">4</span>,<span class="hljs-number">64</span>*<span class="hljs-number">2</span>,<span class="hljs-number">4</span>,<span class="hljs-number">2</span>,<span class="hljs-number">1</span>,bias=<span class="hljs-literal">False</span>),
        nn.BatchNorm2d(<span class="hljs-number">64</span>*<span class="hljs-number">2</span>),
        nn.ReLU(<span class="hljs-literal">True</span>),
        nn.ConvTranspose2d(<span class="hljs-number">64</span>*<span class="hljs-number">2</span>,<span class="hljs-number">1</span>,<span class="hljs-number">4</span>,<span class="hljs-number">2</span>,<span class="hljs-number">1</span>,bias=<span class="hljs-literal">False</span>),
        nn.Tanh()
        )
    self.apply(weights_init)
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">forward</span>(<span class="hljs-params">self,input</span>):</span> <span class="hljs-keyword">return</span> self.model(input)

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Discriminator</span>(<span class="hljs-params">nn.Module</span>):</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self </span>):</span>
    super(Discriminator, self).__init__()
    self.disc  = nn.Sequential(

        <span class="hljs-comment">#input size=32*32</span>
        nn.Conv2d(<span class="hljs-number">1</span>, <span class="hljs-number">64</span>, kernel_size=<span class="hljs-number">4</span>, stride=<span class="hljs-number">2</span>, padding=<span class="hljs-number">1</span>),
          <span class="hljs-comment">#16*16</span>
        nn.LeakyReLU(<span class="hljs-number">0.2</span>),
        disc_layer(<span class="hljs-number">64</span>, <span class="hljs-number">64</span>*<span class="hljs-number">2</span>, <span class="hljs-number">4</span>,<span class="hljs-number">2</span>,<span class="hljs-number">1</span>), <span class="hljs-comment">#8x8</span>
        disc_layer(<span class="hljs-number">64</span>*<span class="hljs-number">2</span>, <span class="hljs-number">64</span>*<span class="hljs-number">4</span>, <span class="hljs-number">4</span>,<span class="hljs-number">2</span>,<span class="hljs-number">1</span>),<span class="hljs-comment">#4x4</span>
        nn.Conv2d(<span class="hljs-number">64</span>*<span class="hljs-number">4</span>, <span class="hljs-number">1</span>, kernel_size=<span class="hljs-number">4</span>,stride=<span class="hljs-number">2</span>,padding=<span class="hljs-number">0</span>),<span class="hljs-comment">#1x1</span>
        nn.Sigmoid()
    )

    self.apply(weights_init)
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">forward</span>(<span class="hljs-params">self, input</span>):</span>
    <span class="hljs-keyword">return</span> self.disc(input)
</code></pre>
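<p>The architecture and training loop rely on two helpers that are not shown above, <code>disc_layer</code> and <code>create_noise</code>. Here is a plausible sketch of both, based on how they are used; the exact originals may differ:</p>

```python
import torch
import torch.nn as nn

def disc_layer(ni, no, kernel_size, stride, padding):
    """Conv -> BatchNorm -> LeakyReLU block used inside the Discriminator."""
    return nn.Sequential(
        nn.Conv2d(ni, no, kernel_size, stride, padding, bias=False),
        nn.BatchNorm2d(no),
        nn.LeakyReLU(0.2, inplace=True),
    )

def create_noise(batch_size, z_dim, device="cpu"):
    """Latent vectors shaped (b, z_dim, 1, 1) for the ConvTranspose2d input."""
    return torch.randn(batch_size, z_dim, 1, 1, device=device)

z = create_noise(8, 100)                     # noise for a batch of 8 samples
block = disc_layer(64, 128, 4, 2, 1)         # halves spatial size: 16x16 -> 8x8
out = block(torch.randn(8, 64, 16, 16))
```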
<p>The training script for both Generator and Discriminator is:</p>
<pre><code class="lang-python">
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">train_disc</span>(<span class="hljs-params">optimizer,real_data, fake_data</span>):</span>
               optimizer.zero_grad()
               prediction_real = disc(real_data)
               error_real = loss_fn(prediction_real.squeeze(), \
                                 torch.ones(len(real_data)).to(device))
               error_real.backward()
               prediction_fake = disc(fake_data)
               error_fake = loss_fn(prediction_fake.squeeze(), \
                                 torch.zeros(len(fake_data)).to(device))
               error_fake.backward()
               optimizer.step()
               <span class="hljs-keyword">return</span> error_real + error_fake

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">train_gen</span>(<span class="hljs-params">optimizer,fake_data</span>):</span>
               optimizer.zero_grad()
               prediction = disc(fake_data)
               error = loss_fn(prediction.squeeze(), \
                            torch.ones(len(fake_data)).to(device))
               error.backward()
               optimizer.step()
               <span class="hljs-keyword">return</span> error

<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(<span class="hljs-number">50</span>):
  loss_g = <span class="hljs-number">0.0</span>
  loss_d = <span class="hljs-number">0.0</span>
  print(<span class="hljs-string">f'Epoch: _________*****<span class="hljs-subst">{i}</span>*****_______'</span>)
  <span class="hljs-keyword">for</span> ix, batch <span class="hljs-keyword">in</span> tqdm.tqdm(enumerate((trn_ldr))):
    x, y = batch
    x = x.to(device)
    b_size = len(x)
    data_fake = gen(create_noise(b_size, z_dim)).detach()
    data_real = x
    loss_d +=train_disc(optimizer_d, data_real, data_fake)
    data_fake = gen(create_noise(b_size, z_dim))
    loss_g += train_gen(optimizer_g,data_fake)
  generated_image = gen(noise).cpu().detach()
  plt.imshow(generated_image[<span class="hljs-number">0</span>].reshape(<span class="hljs-number">32</span>,<span class="hljs-number">32</span>))
  plt.show()
  print(loss_g, loss_d)
  epoch_loss_g = loss_g / len(trn_ldr)  <span class="hljs-comment"># average over all batches</span>
  epoch_loss_d = loss_d / len(trn_ldr)
  losses_g.append(epoch_loss_g)
  losses_d.append(epoch_loss_d)
  print(<span class="hljs-string">f'Epoch: _________*****<span class="hljs-subst">{i}</span> Generator Training Loss : <span class="hljs-subst">{epoch_loss_g}</span> *****_______'</span>)
  print(<span class="hljs-string">f'Epoch: _________*****<span class="hljs-subst">{i}</span> Discriminator Training Loss : <span class="hljs-subst">{epoch_loss_d}</span> *****_______'</span>)
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671506497749/rrDCHP0-X.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671507656470/PToSuKX4G.png" alt class="image--center mx-auto" /></p>
<p>Along with generation, I have also built a CNN model that predicts the letters and numbers. Accuracy is the metric used to judge the model's worthiness, and CrossEntropyLoss is the loss function.</p>
<pre><code class="lang-python">
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">conv_layer</span>(<span class="hljs-params">ni, no, kernel_size, max_pool_num=<span class="hljs-number">2</span>, stride=<span class="hljs-number">1</span>,</span>):</span>
  <span class="hljs-keyword">return</span> nn.Sequential(
      nn.Conv2d(ni, no, kernel_size, stride),
      nn.BatchNorm2d(no),
      nn.LeakyReLU(<span class="hljs-number">0.2</span>),
      nn.MaxPool2d(max_pool_num)
  )

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_model</span>():</span>
  model = nn.Sequential(
      conv_layer(<span class="hljs-number">1</span>,<span class="hljs-number">8</span>,<span class="hljs-number">2</span>),
      conv_layer(<span class="hljs-number">8</span>,<span class="hljs-number">16</span>, <span class="hljs-number">2</span>),
      conv_layer(<span class="hljs-number">16</span>, <span class="hljs-number">64</span>, <span class="hljs-number">2</span>),
      nn.Flatten(),
      nn.Linear(<span class="hljs-number">64</span>*<span class="hljs-number">3</span>*<span class="hljs-number">3</span>,<span class="hljs-number">192</span>),
      nn.ReLU(),
      nn.Linear(<span class="hljs-number">192</span>, <span class="hljs-number">64</span>),
      nn.ReLU(),
      nn.Linear(<span class="hljs-number">64</span>, <span class="hljs-number">36</span>),
      <span class="hljs-comment"># no Softmax here -- nn.CrossEntropyLoss expects raw logits</span>
  ).to(device)
  loss_fn = nn.CrossEntropyLoss()
  optimizer = torch.optim.Adam(model.parameters(), lr=<span class="hljs-number">1e-3</span>)
  <span class="hljs-keyword">return</span> model, loss_fn, optimizer
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671506398532/Z0iz0y6kx.png" alt="Training and validation, loss and accuracy" class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671506434026/C2VPkTqtS.png" alt="Prediction function" class="image--center mx-auto" /></p>
<p><strong>One Fun Fact:</strong></p>
<p>In the dataset there was a problem: a folder was nested inside an image folder, which was unusual, and manually inspecting the whole dataset was not possible. So, I wrote a script to find the faulty entries and removed them. It took a little time, but in the end it was an interesting task. After all, with the power of Python, it becomes easy.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> os <span class="hljs-keyword">import</span> listdir
<span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image

count= <span class="hljs-number">0</span>
<span class="hljs-keyword">for</span> each <span class="hljs-keyword">in</span> os.listdir(Test_path):
  <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> os.listdir(Test_path+<span class="hljs-string">'/'</span>+each):
    <span class="hljs-keyword">if</span> x.endswith(<span class="hljs-string">'.png'</span>):
      <span class="hljs-keyword">try</span>:
        img = Image.open(Test_path+<span class="hljs-string">'/'</span>+each+<span class="hljs-string">'/'</span>+x)
        img.verify()
      <span class="hljs-keyword">except</span>:
        print(each+<span class="hljs-string">'/'</span>+x)
    <span class="hljs-keyword">else</span>:
      print(each+<span class="hljs-string">'/'</span>+x)
</code></pre>
<p>Future works:</p>
<p>→ Go for transfer learning in the prediction model.</p>
<p>→ Build better Discriminator and Generator models.</p>
<p>→ Try conditional GAN</p>
<p>→ Build an interface where people can generate images as well as classify that generated images.</p>
<p>The repository for the full project is:</p>
<p><a target="_blank" href="https://github.com/bibekebib/Nepali-Letters-and-Numbers-classification-generation">https://github.com/bibekebib/Nepali-Letters-and-Numbers-classification-generation</a></p>
]]></content:encoded></item><item><title><![CDATA[Impact of Scaling in Accuracy]]></title><description><![CDATA[Scaling a dataset is one of the major step of Data Pre-Processing, It is done to reduce the range of data variables. When it comes to image, the minimum - maximum possible value range is always 0-255, that means 255 is the maximum value. So the best ...]]></description><link>https://www.chalisebibek.com.np/impact-of-scaling-in-accuracy</link><guid isPermaLink="true">https://www.chalisebibek.com.np/impact-of-scaling-in-accuracy</guid><category><![CDATA[Python]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[python projects]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Mon, 25 Apr 2022 16:45:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650904890491/eres88wDi.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650904978998/M6VQicQMi.png" alt="image.png" /></p>
<p>Scaling a dataset is one of the major steps of data pre-processing; it is done to reduce the range of the data variables. For images, the possible value range is always 0-255, so 255 is the maximum value. The simplest way to scale down an image array is therefore to divide it by this maximum, so the values always lie between 0 and 1.</p>
<p>Scaling down the input variables keeps them in a range that is easier for the model to work with, and training takes less time.</p>
<p>For the given example, we have taken the Fashion-MNIST dataset available in the torchvision datasets module.</p>
<p>A brief note about the dataset: it has 10 classes of clothing items, with a total of 60,000 data points distributed equally, 6,000 per class. Since the dataset is balanced, we don't have to worry about class imbalance.</p>
<p>First, we define the program to train on the dataset without scaling it down. More than the code structure, we are interested in the difference that scaling the dataset can make.</p>
<p>I have also briefly commented the code.</p>
<pre><code class="lang-python"><span class="hljs-comment"># Importing all required libraries</span>

<span class="hljs-keyword">from</span> torch.utils.data <span class="hljs-keyword">import</span> Dataset, DataLoader
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">import</span> torch.nn <span class="hljs-keyword">as</span> nn
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
%matplotlib inline
device = <span class="hljs-string">'cuda'</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">'cpu'</span>
<span class="hljs-keyword">from</span> torchvision <span class="hljs-keyword">import</span> datasets

<span class="hljs-comment"># Download the dataset to data_folder; train=True indicates the training split.</span>
data_folder = <span class="hljs-string">'/data/'</span>

fmnist = datasets.FashionMNIST(data_folder, download=<span class="hljs-literal">True</span>, train=<span class="hljs-literal">True</span>) 
tr_images = fmnist.data
tr_targets = fmnist.targets


<span class="hljs-comment"># Extend the Dataset class to define the dataset the way we want; more on this in upcoming tutorials.</span>


<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">FMNISTDataset</span> (<span class="hljs-params">Dataset</span>):</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, x, y</span>):</span>
    x = x.float()
    x = x.view(<span class="hljs-number">-1</span>, <span class="hljs-number">28</span>*<span class="hljs-number">28</span>) <span class="hljs-comment"># Flatten the 28*28 input image</span>
    self.x, self.y = x, y
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__getitem__</span>(<span class="hljs-params">self, ix</span>):</span>
    x, y = self.x[ix], self.y[ix]
    <span class="hljs-keyword">return</span> x.to(device), y.to(device)

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__len__</span>(<span class="hljs-params">self</span>):</span>
    <span class="hljs-keyword">return</span> len(self.x)

<span class="hljs-comment"># Load the dataset defined above using DataLoader, with a batch size of 32.</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_data</span>():</span>
  train = FMNISTDataset(tr_images, tr_targets)
  trn_dl = DataLoader(train, batch_size=<span class="hljs-number">32</span>, shuffle=<span class="hljs-literal">True</span>)
  <span class="hljs-keyword">return</span> trn_dl


<span class="hljs-comment">#   Defining the model</span>

<span class="hljs-keyword">from</span> torch.optim <span class="hljs-keyword">import</span> SGD
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_model</span>():</span>
  model = nn.Sequential(
      nn.Linear(<span class="hljs-number">28</span>*<span class="hljs-number">28</span>,<span class="hljs-number">1000</span>),
      nn.ReLU(),
      nn.Linear(<span class="hljs-number">1000</span>,<span class="hljs-number">10</span>)
  ).to(device)

  loss_fn = nn.CrossEntropyLoss()
  optimizer = SGD(model.parameters(), lr = <span class="hljs-number">1e-2</span>)
  <span class="hljs-keyword">return</span> model, loss_fn, optimizer

<span class="hljs-comment"># Train the defined model on one batch of data</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">train_batch</span>(<span class="hljs-params">x, y, model, opt, loss_fn</span>):</span>
  model.train()
  prediction = model(x)
  batch_loss = loss_fn(prediction, y)
  batch_loss.backward()
  opt.step()
  opt.zero_grad()
  <span class="hljs-keyword">return</span> batch_loss.item()

<span class="hljs-comment"># For calculating accuracy; @torch.no_grad() ensures no gradients are computed during evaluation.</span>

<span class="hljs-meta">@torch.no_grad()</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">accuracy</span>(<span class="hljs-params">x, y, model</span>):</span>
  model.eval()
  prediction = model(x)
  max_values, argmaxes = prediction.max(<span class="hljs-number">-1</span>)
  is_correct = argmaxes ==y
  <span class="hljs-keyword">return</span> is_correct.cpu().numpy().tolist()

<span class="hljs-comment"># Run training and evaluation for a number of epochs</span>


train_dl = get_data()
model, loss_fn, optimizer = get_model()
losses, accuricies = [], []
<span class="hljs-keyword">for</span> epoch <span class="hljs-keyword">in</span> range(<span class="hljs-number">5</span>):
  print(epoch)
  epoch_losses, epoch_accuricies = [], []
  <span class="hljs-keyword">for</span> ix, batch <span class="hljs-keyword">in</span> enumerate(iter(train_dl)):
    x, y = batch
    batch_loss = train_batch(x, y, model, optimizer, loss_fn)
    epoch_losses.append(batch_loss)
  epoch_loss = np.array(epoch_losses).mean()
  <span class="hljs-keyword">for</span> ix, batch <span class="hljs-keyword">in</span> enumerate(iter(train_dl)):
    x, y  =batch
    is_correct = accuracy(x, y, model)
    epoch_accuricies.extend(is_correct)
  epoch_accuracy = np.mean(epoch_accuricies)
  losses.append(epoch_loss)
  accuricies.append(epoch_accuracy)

<span class="hljs-comment"># For plotting purposes</span>

epochs = np.arange(<span class="hljs-number">5</span>)+<span class="hljs-number">1</span>
plt.figure(figsize=(<span class="hljs-number">20</span>,<span class="hljs-number">5</span>))
plt.subplot(<span class="hljs-number">121</span>)
plt.title(<span class="hljs-string">'Loss value over increasing epochs'</span>)
plt.plot(epochs, losses, label=<span class="hljs-string">'Training Loss'</span>)
plt.legend()
plt.subplot(<span class="hljs-number">122</span>)
plt.title(<span class="hljs-string">'Accuracy value over increasing epochs'</span>)
plt.plot(epochs, accuricies, label=<span class="hljs-string">'Training Accuracy'</span>)
plt.gca().set_yticklabels([<span class="hljs-string">'{:.0f}%'</span>.format(x*<span class="hljs-number">100</span>) \
 <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> plt.gca().get_yticks()])
plt.legend()
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650904594892/9KT_w80C4.png" alt="image.png" /></p>
<p>In the plot above we can see that the loss has decreased to the point of saturation, yet the accuracy hovers around just 13%, which is not how we want our model to perform. So we tweak a single thing: scaling down the range of the dataset. Granted, that is not really hyperparameter tuning, so we won't call it that. </p>
<p>As mentioned above, we divide the input by its maximum possible value, i.e. 255.</p>
<p>The changed code is:</p>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">FMNISTDataset</span> (<span class="hljs-params">Dataset</span>):</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, x, y</span>):</span>

    <span class="hljs-comment">#The changed code starts here</span>
    x = x.float()/<span class="hljs-number">255</span>
    <span class="hljs-comment">#The changed code ends here</span>

    x = x.view(<span class="hljs-number">-1</span>, <span class="hljs-number">28</span>*<span class="hljs-number">28</span>)
    self.x, self.y = x, y
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__getitem__</span>(<span class="hljs-params">self, ix</span>):</span>
    x, y = self.x[ix], self.y[ix]
    <span class="hljs-keyword">return</span> x.to(device), y.to(device)
</code></pre>
<p>Now, let's rerun the code and see how much difference it makes.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> torch.utils.data <span class="hljs-keyword">import</span> Dataset, DataLoader
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">import</span> torch.nn <span class="hljs-keyword">as</span> nn
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
%matplotlib inline
device = <span class="hljs-string">'cuda'</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">'cpu'</span>
<span class="hljs-keyword">from</span> torchvision <span class="hljs-keyword">import</span> datasets


data_folder = <span class="hljs-string">'/data/'</span>

fmnist = datasets.FashionMNIST(data_folder, download=<span class="hljs-literal">True</span>, train=<span class="hljs-literal">True</span>)
tr_images = fmnist.data
tr_targets = fmnist.targets


<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">FMNISTDataset</span> (<span class="hljs-params">Dataset</span>):</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, x, y</span>):</span>

    <span class="hljs-comment">#The changed code starts here</span>
    x = x.float()/<span class="hljs-number">255</span>
    <span class="hljs-comment">#The changed code ends here</span>

    x = x.view(<span class="hljs-number">-1</span>, <span class="hljs-number">28</span>*<span class="hljs-number">28</span>)
    self.x, self.y = x, y
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__getitem__</span>(<span class="hljs-params">self, ix</span>):</span>
    x, y = self.x[ix], self.y[ix]
    <span class="hljs-keyword">return</span> x.to(device), y.to(device)

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__len__</span>(<span class="hljs-params">self</span>):</span>
    <span class="hljs-keyword">return</span> len(self.x)



<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_data</span>():</span>
  train = FMNISTDataset(tr_images, tr_targets)
  trn_dl = DataLoader(train, batch_size=<span class="hljs-number">32</span>, shuffle=<span class="hljs-literal">True</span>)
  <span class="hljs-keyword">return</span> trn_dl



<span class="hljs-keyword">from</span> torch.optim <span class="hljs-keyword">import</span> SGD
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_model</span>():</span>
  model = nn.Sequential(
      nn.Linear(<span class="hljs-number">28</span>*<span class="hljs-number">28</span>,<span class="hljs-number">1000</span>),
      nn.ReLU(),
      nn.Linear(<span class="hljs-number">1000</span>,<span class="hljs-number">10</span>)
  ).to(device)

  loss_fn = nn.CrossEntropyLoss()
  optimizer = SGD(model.parameters(), lr = <span class="hljs-number">1e-2</span>)
  <span class="hljs-keyword">return</span> model, loss_fn, optimizer


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">train_batch</span>(<span class="hljs-params">x, y, model, opt, loss_fn</span>):</span>
  model.train()
  prediction = model(x)
  batch_loss = loss_fn(prediction, y)
  batch_loss.backward()
  opt.step()
  opt.zero_grad()
  <span class="hljs-keyword">return</span> batch_loss.item()


<span class="hljs-meta">@torch.no_grad()</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">accuracy</span>(<span class="hljs-params">x, y, model</span>):</span>
  model.eval()
  prediction = model(x)
  max_values, argmaxes = prediction.max(<span class="hljs-number">-1</span>)
  is_correct = argmaxes ==y
  <span class="hljs-keyword">return</span> is_correct.cpu().numpy().tolist()



train_dl = get_data()
model, loss_fn, optimizer = get_model()
losses, accuricies = [], []
<span class="hljs-keyword">for</span> epoch <span class="hljs-keyword">in</span> range(<span class="hljs-number">5</span>):
  print(epoch)
  epoch_losses, epoch_accuricies = [], []
  <span class="hljs-keyword">for</span> ix, batch <span class="hljs-keyword">in</span> enumerate(iter(train_dl)):
    x, y = batch
    batch_loss = train_batch(x, y, model, optimizer, loss_fn)
    epoch_losses.append(batch_loss)
  epoch_loss = np.array(epoch_losses).mean()
  <span class="hljs-keyword">for</span> ix, batch <span class="hljs-keyword">in</span> enumerate(iter(train_dl)):
    x, y  =batch
    is_correct = accuracy(x, y, model)
    epoch_accuricies.extend(is_correct)
  epoch_accuracy = np.mean(epoch_accuricies)
  losses.append(epoch_loss)
  accuricies.append(epoch_accuracy)



epochs = np.arange(<span class="hljs-number">5</span>)+<span class="hljs-number">1</span>
plt.figure(figsize=(<span class="hljs-number">20</span>,<span class="hljs-number">5</span>))
plt.subplot(<span class="hljs-number">121</span>)
plt.title(<span class="hljs-string">'Loss value over increasing epochs'</span>)
plt.plot(epochs, losses, label=<span class="hljs-string">'Training Loss'</span>)
plt.legend()
plt.subplot(<span class="hljs-number">122</span>)
plt.title(<span class="hljs-string">'Accuracy value over increasing epochs'</span>)
plt.plot(epochs, accuricies, label=<span class="hljs-string">'Training Accuracy'</span>)
plt.gca().set_yticklabels([<span class="hljs-string">'{:.0f}%'</span>.format(x*<span class="hljs-number">100</span>) \
 <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> plt.gca().get_yticks()])
plt.legend()
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650904644063/_VwSLXCrv.png" alt="image.png" /></p>
<p>Wow, look at the accuracy: it's around 85%, up from 13%, just by scaling down the input.</p>
<p>But what could be the reason for such a drastic increase in accuracy merely from scaling down the input? Let's see the math behind this.</p>
<p>We know, </p>
<p>  sigmoid =  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650903480483/XlwUqZE1G.png" alt="image.png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650903841039/yZSiFodt-.png" alt="image.png" />
      source: Modern Computer Vision with PyTorch</p>
<p>On the left-hand side, when the input is 255, the sigmoid output barely changes once the weight is greater than or equal to 0.1, and the change is similarly tiny when the weight is extremely small.</p>
<p>The reason is that the exponential of a large negative value is very close to 0, so the sigmoid saturates. On the right-hand side, however, since the input is 1, the same weight changes produce a visible change in the sigmoid output.</p>
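<p>This saturation effect is easy to check numerically. The sketch below (plain Python, with illustrative weight values of 0.1 and 0.2) evaluates the sigmoid for an unscaled input of 255 and a scaled input of 1:</p>

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Unscaled input (255): the weighted sum is huge, so the sigmoid saturates
# at 1 and barely reacts to a change in the weight.
print(sigmoid(255 * 0.1), sigmoid(255 * 0.2))   # both are ~1.0

# Scaled input (1): the same weight change now produces a visible difference.
print(sigmoid(1 * 0.1), sigmoid(1 * 0.2))       # ~0.525 vs ~0.550
```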
<p>Scaling the input dataset so that it spans a much smaller range of values generally helps achieve better model accuracy.</p>
<p><a target="_blank" href="https://github.com/bibekebib/Pytorch-Tutorials-Code-Base/blob/main/Impact_of_Scaling_in_Accuracy.ipynb">GitHub</a></p>
]]></content:encoded></item><item><title><![CDATA[Learning PyTorch fundamental Neural Network Structure]]></title><description><![CDATA[First Lets import all the requirements that is needed for building the basic architecture of Neural Network in PyTorch. If you haven't yet installed PyTorch, i strongly suggest you to install it from here (PyTorch Official Site) cause I wont be teach...]]></description><link>https://www.chalisebibek.com.np/learning-pytorch-fundamental-neural-network-structure</link><guid isPermaLink="true">https://www.chalisebibek.com.np/learning-pytorch-fundamental-neural-network-structure</guid><category><![CDATA[Python]]></category><category><![CDATA[Deep Learning]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Sun, 24 Apr 2022 03:53:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650769575740/8Ntiq6I90.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650767837435/Dcg7eOfko.png" alt="image.png" /></p>
<p>First, let's import everything needed to build the basic architecture of a neural network in PyTorch. If you haven't installed PyTorch yet, I strongly suggest installing it from the <a target="_blank" href="https://pytorch.org/">PyTorch official site</a>, because I won't be covering installation in detail.</p>
<p>We also won't be talking much about PyTorch fundamentals, like tensors and other operations; if you want that, I will make a separate tutorial on it. In this series I assume you are fairly familiar with basic PyTorch, NumPy, and Python, and we will continue with that assumption. There is a lot to cover, so let's see how far we can go.</p>
<p>The first task is to import the libraries we need. PyTorch is a must from the start; anything else we need later will be imported in subsequent cells.</p>
<pre><code class="lang-python"><span class="hljs-comment"># importing libraries</span>
<span class="hljs-keyword">import</span> torch
</code></pre>
<p>To keep things simple, I have taken a toy dataset. We may take a real-life dataset in the future, but for now let's stick with this one.</p>
<pre><code class="lang-python">x  = [[<span class="hljs-number">1</span>,<span class="hljs-number">2</span>],[<span class="hljs-number">3</span>,<span class="hljs-number">4</span>],[<span class="hljs-number">5</span>,<span class="hljs-number">6</span>],[<span class="hljs-number">7</span>,<span class="hljs-number">8</span>]]
y = [ [<span class="hljs-number">3</span>],[<span class="hljs-number">7</span>],[<span class="hljs-number">11</span>],[<span class="hljs-number">15</span>]]
</code></pre>
<p>The next task is to convert it into tensors, the building blocks of the PyTorch library. A tensor is like a NumPy ndarray, but not exactly the same.</p>
<pre><code class="lang-python">x = torch.tensor(x).float()
y = torch.tensor(y).float()
</code></pre>
<p>As mentioned above, tensors and NumPy ndarrays are similar but different, and the difference shows at execution time. PyTorch tensors can run on a GPU while NumPy arrays cannot, and thanks to threading, PyTorch tensors often execute faster than NumPy arrays even on a CPU. So if a GPU is available, we will use it to the fullest.</p>
<p>Since buying a GPU is expensive, we will use free Google Colab and enable its GPU.</p>
<pre><code class="lang-python">device = <span class="hljs-string">'cuda'</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">'cpu'</span>

print(device)
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650768241652/pl_jxtdKc.png" alt="image.png" /></p>
<p>The default in Colab is CPU, so if you want to change to GPU, navigate through Runtime and change the runtime type to GPU.</p>
<pre><code class="lang-python">X = x.to(device)
Y = y.to(device)
</code></pre>
<p>Now we get to the basic foundation of a neural network. Let's go through it step by step, with an explanation of each one.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> torch.nn <span class="hljs-keyword">as</span> nn 
<span class="hljs-comment"># torch.nn is the module where the neural-network building blocks reside.</span>
</code></pre>
<pre><code class="lang-python"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Nnet</span>(<span class="hljs-params">nn.Module</span>):</span>
  <span class="hljs-comment"># Inheriting from torch.nn.Module is compulsory,</span>
  <span class="hljs-comment"># as it is the base class for all neural networks.</span>
  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self</span>):</span> <span class="hljs-comment"># initialize the components inherited from nn.Module</span>
    super().__init__()
    <span class="hljs-comment"># super().__init__() makes sure the class fully inherits nn.Module;</span>
    <span class="hljs-comment"># with this, we can take full advantage of its pre-built functionalities</span>


    <span class="hljs-comment">#define Layers in the Neural Network</span>

    self.input_to_hidden_layer = nn.Linear(<span class="hljs-number">2</span>,<span class="hljs-number">8</span>)
    self.hidden_layer_activation = nn.ReLU()
    self.hidden_to_output = nn.Linear(<span class="hljs-number">8</span>,<span class="hljs-number">1</span>)


    <span class="hljs-comment"># defining forward propagation</span>

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">forward</span> (<span class="hljs-params">self, x</span>):</span>
      x = self.input_to_hidden_layer(x)
      x = self.hidden_layer_activation(x)
      x = self.hidden_to_output(x)
      <span class="hljs-keyword">return</span> x
</code></pre>
<p>Here we define 3 layers: an input layer, a hidden layer, and an output layer, with an activation in the hidden layer.</p>
<p>Looking closely, we used nn.Linear(2,8) and nn.Linear(8,1): the first parameter is the number of input features to the layer and the second is the number of output features. So, matching our dataset, we feed 2 features into the layer and get 8 output features in the hidden layer.</p>
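<p>To make those feature counts concrete, here is a minimal framework-free sketch of what a linear layer computes, y = Wx + b, mapping 2 input features to 8 outputs. The helper function and random placeholder weights are illustrative only, not PyTorch's actual implementation:</p>

```python
import random

def linear(x, W, b):
    """y_j = sum_i x_i * W[j][i] + b[j] -- what nn.Linear(len(x), len(b)) computes."""
    return [sum(xi * wji for xi, wji in zip(x, W_row)) + bj
            for W_row, bj in zip(W, b)]

in_features, out_features = 2, 8          # mirrors nn.Linear(2, 8)
W = [[random.uniform(-1, 1) for _ in range(in_features)] for _ in range(out_features)]
b = [0.0] * out_features

y = linear([1.0, 2.0], W, b)              # 2 inputs in ...
print(len(y))                             # ... 8 outputs, one per hidden unit
```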
<p>The hidden layer is followed by an activation function, which in brief decides whether and how strongly each node fires.</p>
<p>Here we have used the ReLU activation function, which stands for Rectified Linear Unit. Other popular activation functions are:</p>
<ol>
<li>sigmoid</li>
<li>SoftMax</li>
<li>Tanh</li>
</ol>
<p>The forward method defines forward propagation. The name "forward" is compulsory: nn.Module dispatches calls to a method of exactly that name, so using any other name would cause an error.</p>
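<p>Why the name matters can be sketched in a few lines: a toy stand-in for nn.Module whose __call__ dispatches to a method named forward. This is a simplified illustration, not PyTorch's real dispatch logic (which also runs hooks and other bookkeeping):</p>

```python
class MiniModule:
    """Toy stand-in for nn.Module: calling the instance dispatches to .forward()."""
    def __call__(self, *args):
        return self.forward(*args)   # this is why the method must be named 'forward'

class Doubler(MiniModule):
    def forward(self, x):
        return 2 * x

class Renamed(MiniModule):
    def fwd(self, x):                # wrong name: __call__ will never find it
        return 2 * x

print(Doubler()(21))                 # the instance is called like a function -> 42
try:
    Renamed()(21)
except AttributeError:
    print("no method named 'forward' -> AttributeError")
```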
<p>Let's create an instance of the Nnet class called mynet.</p>
<p>We will also look at what the randomly initialized weights look like.</p>
<pre><code class="lang-python">mynet = Nnet().to(device)
<span class="hljs-comment"># Moving the model to the device is compulsory if we want to utilize the GPU.</span>
</code></pre>
<pre><code class="lang-python">mynet.input_to_hidden_layer.weight
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650769500390/jZbg9nd7r.png" alt="image.png" /></p>
<p>Note that every time you run the code above, the initialized weights will be different. If you want the same weights each run, set the seed with torch.manual_seed(42).</p>
<p>Now, let's define the loss function for our model. We will use mean squared error loss in our case; other prominent loss functions include:</p>
<ul>
<li>CrossEntropyLoss (for multi-class classification)</li>
<li>BCELoss (binary cross-entropy loss, for binary classification)</li>
</ul>
<p>More on these in upcoming tutorials.</p>
<pre><code class="lang-python">loss_func = nn.MSELoss()

model_output = mynet(X)
loss_value = loss_func(model_output, Y)
print(loss_value)
</code></pre>
<p>In PyTorch, a loss function takes the predicted output as its first argument and the target (actual) output as its second.</p>
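<p>As a quick framework-free sanity check of what MSELoss computes and of that argument order, here is a sketch with hypothetical prediction values against the toy targets [3, 7, 11, 15]:</p>

```python
def mse(prediction, target):
    """Mean squared error -- mirrors nn.MSELoss()(prediction, target)."""
    assert len(prediction) == len(target)
    return sum((p - t) ** 2 for p, t in zip(prediction, target)) / len(prediction)

# Hypothetical model outputs vs. the true targets:
print(mse([2.5, 7.5, 10.0, 16.0], [3, 7, 11, 15]))  # 0.625
```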
<p>Now it's time to optimize the model with an optimizer that tries to reduce the loss value. The optimizer's inputs are the model parameters (weights and biases) and the learning rate used when updating them.</p>
<p>Here we employ stochastic gradient descent (SGD); other optimizers suit other use cases.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> torch.optim <span class="hljs-keyword">import</span> SGD
opt = SGD(mynet.parameters(), lr=<span class="hljs-number">0.001</span>)
</code></pre>
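<p>Under the hood, each SGD step simply nudges every parameter against its gradient, scaled by the learning rate. A minimal one-parameter sketch (with the toy loss (w - 3)^2 and illustrative values of my choosing) shows the update rule converging:</p>

```python
def sgd_step(w, grad, lr):
    """One SGD update: w <- w - lr * grad."""
    return w - lr * grad

w = 0.0
for _ in range(200):
    grad = 2 * (w - 3.0)           # gradient of the toy loss (w - 3)^2
    w = sgd_step(w, grad, lr=0.1)

print(round(w, 4))                 # 3.0 -- converged to the minimum
```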
<p>Now we need to perform the following steps together in a single epoch, repeated over a number of epochs:</p>
<ul>
<li>Calculate the loss value for the given input and output</li>
<li>Calculate the gradient for each parameter</li>
<li>Update the weights based on the learning rate and the gradients</li>
<li>Flush out the previous epoch's gradients</li>
</ul>
<pre><code class="lang-python">loss_history = []
<span class="hljs-keyword">for</span> _ <span class="hljs-keyword">in</span> range(<span class="hljs-number">50</span>):
  opt.zero_grad() <span class="hljs-comment"># flush out previous epochs gradients</span>
  loss_value = loss_func(mynet(X), Y) <span class="hljs-comment">#calculating loss value</span>
  loss_value.backward() <span class="hljs-comment">#performing back propagation</span>
  opt.step() <span class="hljs-comment">#update weights according to the gradients calculated</span>
  loss_history.append(loss_value.cpu().detach().numpy()) 
  <span class="hljs-comment"># Move the loss tensor from GPU to CPU, detach it, and convert to NumPy (NumPy doesn't support GPU tensors).</span>
</code></pre>
<p>Let's plot the result.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt
%matplotlib inline
plt.plot(loss_history)
plt.title(<span class="hljs-string">'loss variation over increasing epochs'</span>)
plt.xlabel(<span class="hljs-string">'epochs'</span>)
plt.ylabel(<span class="hljs-string">'loss value'</span>)
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1650769403425/5vtxU7BDt.png" alt="image.png" /></p>
<h2 id="heading-saving-and-loading-the-pytorch-model">Saving and loading the pytorch model</h2>
<pre><code class="lang-python"><span class="hljs-comment">#saving model</span>
torch.save(mynet.state_dict(), <span class="hljs-string">'mymodel.pth'</span>)
</code></pre>
<pre><code class="lang-python"><span class="hljs-comment">#loading model</span>
mynet.load_state_dict(torch.load(<span class="hljs-string">'mymodel.pth'</span>))
</code></pre>
<p><a target="_blank" href="https://github.com/bibekebib/Pytorch-Tutorials-Code-Base/blob/main/Learning_Pytorch_fundamental_Neural_Network_Structure.ipynb">GitHub Link</a> </p>
]]></content:encoded></item><item><title><![CDATA[A gentle guide on TF-IDF!]]></title><description><![CDATA[Hi!, How are you?
Today lets, see how we can represent text data of a corpus in array format. 
As we know, computers only understand numbers, and when we are performing any machine learning algorithm, we have to encode each data into some sort of num...]]></description><link>https://www.chalisebibek.com.np/a-gentle-guide-on-tf-idf</link><guid isPermaLink="true">https://www.chalisebibek.com.np/a-gentle-guide-on-tf-idf</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[Python]]></category><category><![CDATA[nlp]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Wed, 02 Jun 2021 14:12:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622264760243/Ym9VYSZ-v.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi!, How are you?</p>
<p>Today, let's see how we can represent the text data of a corpus in array format.
As we know, computers only understand numbers, so when running any machine learning algorithm we have to encode each piece of data in some numerical format so that the algorithm can find patterns in the data and build a model.
In Natural Language Processing, and especially in text analysis, the text itself is the data, so before feeding it to an algorithm we must convert the raw text into numerical data.
There are various ways to do this; let's discuss them.
The first is Bag of Words (BoW): it simply counts the number of times each word appears in a corpus. (Here, corpus means the entire text dataset.)
Let's take 3 sentences:</p>
<ol>
<li>"It is going to rain today"</li>
<li>"I am going to drink coffee"</li>
<li>"I am going to capital today"</li>
</ol>
<p>If we perform Bag of Words on the example above, we first count the number of times each individual word appears in the corpus.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Term</td><td>Frequency</td></tr>
</thead>
<tbody>
<tr>
<td>going</td><td>3</td></tr>
<tr>
<td>to</td><td>3</td></tr>
<tr>
<td>i</td><td>2</td></tr>
<tr>
<td>am</td><td>2</td></tr>
<tr>
<td>today</td><td>2</td></tr>
<tr>
<td>it</td><td>1</td></tr>
<tr>
<td>is</td><td>1</td></tr>
<tr>
<td>rain</td><td>1</td></tr>
<tr>
<td>drink</td><td>1</td></tr>
<tr>
<td>coffee</td><td>1</td></tr>
<tr>
<td>capital</td><td>1</td></tr>
</tbody>
</table>
</div><p>Now if we represent it in the tabular form, the bag of words representation looks like this.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Term/document No</td><td>going</td><td>it</td><td>to</td><td>i</td><td>am</td><td>is</td><td>rain</td><td>today</td><td>drink</td><td>coffee</td><td>capital</td></tr>
</thead>
<tbody>
<tr>
<td>1.</td><td>1</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td></tr>
<tr>
<td>2.</td><td>1</td><td>0</td><td>1</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td></tr>
<tr>
<td>3.</td><td>1</td><td>0</td><td>1</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>1</td></tr>
</tbody>
</table>
</div><p>But we can already see the problem in this Bag of Words representation, All the words carry the same importance. In the given dataset, the word 'going' is present in each of the sentences. While, words like rain, coffee, capital are present only in each sentence, and carry the main essence of the sentence. But when we represent it in the BoW model, these all words got the value of 1. So, BoW model representation, will not represent the importance of some words which can be problematic during 
    The problem we can see is it, no order is maintained, which means the semantic information is not preserved. We know, the text is sequential data, so the order of data is very important, but the BoW model doesn't care about the order of data. So, this can cause problems when we have to work on models where data need to be in proper order so that machines can learn from the data.
    If you want to perform Bag of Words in python sklearn, we can perform it as.</p>
<pre><code class="lang-python">    <span class="hljs-keyword">from</span> sklearn.feature_extraction.text <span class="hljs-keyword">import</span> CountVectorizer
    <span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
    vectorizer = CountVectorizer()
    doc = [<span class="hljs-string">"It is going to rain today"</span>,
        <span class="hljs-string">"I am going to drink coffee"</span>,
        <span class="hljs-string">"I am going to capital today"</span>]
    X = vectorizer.fit_transform(doc)
    column = vectorizer.get_feature_names_out()  <span class="hljs-comment"># get_feature_names() was removed in newer scikit-learn versions</span>
    df = pd.DataFrame(X.toarray(), columns=column)
    df
</code></pre>
<p>In order to solve the problems with the Bag of Words model, we use something called <strong>TF-IDF</strong>.
So what is TF-IDF?
    TF-IDF stands for Term Frequency - Inverse Document Frequency.
    Here, Term Frequency is the ratio of the number of occurrences of a word in a document to the total number of words in that document.
    Term frequency, tf(<em>t</em>,<em>d</em>), is the frequency of term <em>t</em>,
<img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/dd4f8a91dd0d28a11c00c94a13a315a5b49a8070" alt="{\displaystyle \mathrm {tf} (t,d)={\frac {f_{t,d}}{\sum _{t'\in d}{f_{t',d}}}}}" /> 
where <em>f<sub>t,d</sub></em> is the <em>raw count</em> of the term in document <em>d</em>, i.e., the number of times that term t occurs in document d. There are various other ways to define term frequency.</p>
<p>From the above example, let's compute the term frequency of the word 'going' in document 1:
    Here, 'going' appears once in the document, and the document has 6 words in total. So,
    tf(going) = 1/6 = 0.1666
    Similarly, the tf of the word 'to' in document 1 is tf(to) = 1/6 = 0.1666. In fact, since every term in this small corpus occurs exactly once per document and every document has 6 words, each term's tf is either 0.1666 (present) or 0 (absent).</p>
<p>So, let's calculate the term frequency for all the terms:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Term</td><td>TF value (doc1)</td><td>TF value (doc2)</td><td>TF value (doc3)</td></tr>
</thead>
<tbody>
<tr>
<td>going</td><td>0.1666</td><td>0.1666</td><td>0.1666</td></tr>
<tr>
<td>to</td><td>0.1666</td><td>0.1666</td><td>0.1666</td></tr>
<tr>
<td>i</td><td>0</td><td>0.1666</td><td>0.1666</td></tr>
<tr>
<td>am</td><td>0</td><td>0.1666</td><td>0.1666</td></tr>
<tr>
<td>it</td><td>0.1666</td><td>0</td><td>0</td></tr>
<tr>
<td>is</td><td>0.1666</td><td>0</td><td>0</td></tr>
<tr>
<td>rain</td><td>0.1666</td><td>0</td><td>0</td></tr>
<tr>
<td>today</td><td>0.1666</td><td>0</td><td>0.1666</td></tr>
<tr>
<td>drink</td><td>0</td><td>0.1666</td><td>0</td></tr>
<tr>
<td>coffee</td><td>0</td><td>0.1666</td><td>0</td></tr>
<tr>
<td>capital</td><td>0</td><td>0</td><td>0.1666</td></tr>
</tbody>
</table>
</div><p>Since we have calculated the term frequency, let's discuss <strong>Inverse Document Frequency (IDF)</strong>.
IDF is calculated as the log of the ratio of the number of documents to the number of documents that contain the particular term. It measures how much information the word provides, i.e., how common or how rare the word is across the given corpus.
<img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/ac67bc0f76b5b8e31e842d6b7d28f8949dab7937" alt=" \mathrm{idf}(t, D) =  \log \frac{N}{|\{d \in D: t \in d\}|}" />
with</p>
<ul>
<li><img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/f5e3890c981ae85503089652feb48b191b57aae3" alt="N" /> : total number of documents in the corpus N = | D | </li>
<li>|{ d ∈ D : t ∈ d }| : number of documents where the term <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/65658b7b223af9e1acc877d848888ecdb4466560" alt="t" /> appears (i.e., tf(t,d) ≠ 0). If the term is not in the corpus, this will lead to a division by zero. It is therefore common to adjust the denominator to 1 + |{ d ∈ D : t ∈ d }|.</li>
</ul>
<p>So, let's calculate the IDF value of some terms.
The IDF of 'going' can be calculated as follows:
    The word 'going' is present in all three documents, and there are 3 documents in total, so the idf value of 'going' must be idf(going) = log(3/3) = log(1) = 0.
    This tells us that since 'going' is present in all 3 documents, it carries no discriminating importance at all.
    If we calculate the idf value of 'today', which is present in 2 of the 3 documents, it becomes: idf(today) = log(3/2) = 0.17609
    And if we calculate the idf value of 'coffee', present in only 1 document, it becomes: idf(coffee) = log(3/1) = 0.47712
So, let's see what the IDF value of each term becomes.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Term</td><td>IDF value</td></tr>
</thead>
<tbody>
<tr>
<td>going</td><td>0</td></tr>
<tr>
<td>to</td><td>0</td></tr>
<tr>
<td>i</td><td>0.17609</td></tr>
<tr>
<td>am</td><td>0.17609</td></tr>
<tr>
<td>today</td><td>0.17609</td></tr>
<tr>
<td>it</td><td>0.47712</td></tr>
<tr>
<td>is</td><td>0.47712</td></tr>
<tr>
<td>rain</td><td>0.47712</td></tr>
<tr>
<td>drink</td><td>0.47712</td></tr>
<tr>
<td>coffee</td><td>0.47712</td></tr>
<tr>
<td>capital</td><td>0.47712</td></tr>
</tbody>
</table>
</div><p>Now, it's time to do the magic and calculate <strong>TF-IDF</strong>. It is simply the product of Term Frequency and Inverse Document Frequency.
If we calculate the TF-IDF value of the word 'today' in document 1, we get TFIDF(today) = TF(today) × IDF(today) = 0.1666 × 0.17609 ≈ 0.02933.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Term/document No</td><td>going</td><td>it</td><td>to</td><td>i</td><td>am</td><td>is</td><td>rain</td><td>today</td><td>drink</td><td>coffee</td><td>capital</td></tr>
</thead>
<tbody>
<tr>
<td>1.</td><td>0</td><td>0.07948</td><td>0</td><td>0</td><td>0</td><td>0.07948</td><td>0.07948</td><td>0.02933</td><td>0</td><td>0</td><td>0</td></tr>
<tr>
<td>2.</td><td>0</td><td>0</td><td>0</td><td>0.02933</td><td>0.02933</td><td>0</td><td>0</td><td>0</td><td>0.07948</td><td>0.07948</td><td>0</td></tr>
<tr>
<td>3.</td><td>0</td><td>0</td><td>0</td><td>0.02933</td><td>0.02933</td><td>0</td><td>0</td><td>0.02933</td><td>0</td><td>0</td><td>0.07948</td></tr>
</tbody>
</table>
</div><p>This is the final TF-IDF text representation for the example corpus. You can try TF-IDF in sklearn with the code given below. </p>
<pre><code class="lang-python">    <span class="hljs-keyword">from</span> sklearn.feature_extraction.text <span class="hljs-keyword">import</span> TfidfVectorizer
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(doc)
    column = vectorizer.get_feature_names_out()
    df = pd.DataFrame(X.toarray(), columns=column)
</code></pre>
<p>If you have tried TF-IDF in sklearn, you may notice that the results are quite different. That's because sklearn's TF-IDF vectorizer uses a smoothed IDF (natural log, with 1 added to the document counts) and L2-normalizes each row by default, so its parameters are tuned differently. The method described above is the root idea behind TF-IDF, but in practice it needs to be tuned for large, extensive use. </p>
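<p>To make the arithmetic above concrete, here is a minimal sketch (not the article's original code) that recomputes the tables using the same base-10 log formulas; the function names tf, idf, and tfidf are my own choices for illustration.</p>
<pre><code class="lang-python">import math

# The three example sentences, lowercased and split into words.
docs = [
    "it is going to rain today".split(),
    "i am going to drink coffee".split(),
    "i am going to capital today".split(),
]

def tf(term, doc):
    # raw count of the term divided by the total number of words in the document
    return doc.count(term) / len(doc)

def idf(term, docs):
    # log10 of (number of documents / number of documents containing the term)
    df = sum(1 for d in docs if term in d)
    return math.log10(len(docs) / df)

def tfidf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

print(round(tfidf("today", docs[0], docs), 5))  # matches the table value for doc 1
print(tfidf("going", docs[0], docs))            # 0.0 -- 'going' is in every document
</code></pre>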
<p>If you are still confused about TF-IDF, let me know in the comments. Until then, enjoy learning.
The code for this tutorial can also be found at <a target="_blank" href="https://deepnote.com/@bibek-chalise/Tfidf-code-1oZ1k9PFRpuMkueZbWln1Q">this link</a>.</p>
<h1 id="thank-you">Thank you!</h1>
]]></content:encoded></item><item><title><![CDATA[A gentle Guide to HyperParameter Tuning.]]></title><description><![CDATA[Hi!
How you doing?
Today we will be doing hyperparameter tuning with the help of the RandomisedSearchCV algorithm.
What are Hyperparameters actually?
Let’s see this way when using a machine learning algorithm, there are various parameters associated ...]]></description><link>https://www.chalisebibek.com.np/a-gentle-guide-to-hyperparameter-tuning</link><guid isPermaLink="true">https://www.chalisebibek.com.np/a-gentle-guide-to-hyperparameter-tuning</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Python 3]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[algorithms]]></category><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Wed, 12 May 2021 06:18:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1620800142541/wNlbH7u64.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Hi!</strong>
How you doing?
Today we will be doing hyperparameter tuning with the help of the RandomisedSearchCV algorithm.</p>
<h1 id="what-are-hyperparameters-actually">What are Hyperparameters actually?</h1>
<p>Let’s see it this way: when using a machine learning algorithm, there are various parameters associated with the instance or method of that algorithm we are using. The defaults provided usually give significantly good results. However, if we want to increase the accuracy of the results, we have to make some tweaks to those default parameters. The process of tuning such parameters, in the hope of better accuracy for a given model, is called Hyperparameter Tuning.
If that sounds like jargon, let's look at an example of the default parameters of the Support Vector Machine classifier (SVC) instance.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1k2igfd0brw0d1t5xkvl.png" alt="SVC parameters" /></p>
<p>In the above example, when we inspect the parameters of the SVC instance, we see the default parameters mentioned above. So when we instantiate SVC, the default parameters are passed into it. But when we visit the official documentation of <a target="_blank" href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html">SVC</a>, we see that many of these parameters accept other values. So we exploit that flexibility, try different sets of parameters, and find the ones that give the best results. </p>
<p><strong>So, what we do is take a  dataset and work on it and find the accuracy by default parameters and then tune few parameters to increase the score.</strong></p>
<p>For this task, we will be using Jupyter Notebook. If you prefer doing it on a local machine that's okay, but I highly suggest using an online Jupyter Notebook. <a target="_blank" href="https://colab.research.google.com/notebooks/intro.ipynb#recent=true">Colab</a> by Google is a very good resource that we can use for free, and Deepnote is another alternative to Google Colab.
Here, I personally will be using <a target="_blank" href="https://deepnote.com/">Deepnote</a>.</p>
<pre><code class="lang-Python">    <span class="hljs-comment">#importing required modules.</span>
    <span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd <span class="hljs-comment">#for tabular data frame analysis</span>
    <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np <span class="hljs-comment">#for mathematical manipulation</span>
    <span class="hljs-keyword">import</span> matplotlib.pyplot <span class="hljs-keyword">as</span> plt <span class="hljs-comment">#for data visualization</span>
    <span class="hljs-keyword">import</span> seaborn <span class="hljs-keyword">as</span> sns <span class="hljs-comment">#seaborn is built on top of matplotlib</span>
</code></pre>
<p>So, we need a dataset for it. There are various datasets available in <a target="_blank" href="https://www.kaggle.com">kaggle</a>.  And we take a simple dataset from <a target="_blank" href="https://www.kaggle.com/ronitf/heart-disease-uci">Kaggle Heart Disease Dataset</a>.</p>
<pre><code class="lang-Python"><span class="hljs-comment">#loading dataset</span>
df = pd.read_csv(<span class="hljs-string">'./heart.csv'</span>) <span class="hljs-comment">#The dataset is downloaded and saved to root folder.</span>
df.head()
</code></pre>
<p>After this, we get the first five rows of the dataset. </p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ppl9tkd7pcdkon7oa99d.png" alt="df.head()" />However, if we look closely at the <strong>target</strong> feature, we see all 1's, and if we make an even closer observation with df['target'], we see a pattern: the first half of the dataset has the value 1 in the target feature and the remaining half has 0. This can be a great problem and can lead to bad results in the training and testing phases. So, what we do is shuffle this dataset using the pandas sample() method.</p>
<pre><code class="lang-Python">    df = df.sample(frac = <span class="hljs-number">2</span>, random_state=<span class="hljs-number">42</span>, replace = <span class="hljs-literal">True</span>)
    df.head()
</code></pre>
<p>After this, when we analyse the dataset, we see a random distribution of 0's and 1's in the target variable.
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uo7tfn2kpahcyylm3dhf.png" alt="df.head() after shuffling" />
 Now, we check whether there are any null or missing values in the dataset. </p>
<pre><code class="lang-Python">    df.isnull().sum()
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1y8wclssbhvvagrkptns.png" alt="df.isnull().sum()" />
Here, we can see that there are no null values. </p>
<p><strong>Now we are set to go for machine Learning Tasks</strong></p>
<p>First we import required modules.</p>
<pre><code class="lang-Python">    <span class="hljs-comment">#Import Machine Learning Libraries</span>
    <span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> train_test_split
    <span class="hljs-keyword">from</span> sklearn <span class="hljs-keyword">import</span> svm 
    <span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> RandomizedSearchCV
</code></pre>
<p>The required modules are imported. train_test_split divides the dataset into training and testing sub-datasets. svm provides the Support Vector Machine algorithm. RandomizedSearchCV is for hyperparameter tuning. An alternative to RandomizedSearchCV is GridSearchCV; however, RandomizedSearchCV is likely to be faster than GridSearchCV because it tries only a random subset of the parameter combinations.</p>
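<p>As a rough illustration of that speed difference (this is my own sketch on a synthetic dataset from make_classification, not part of the tutorial's heart-disease workflow): GridSearchCV fits every combination in the grid, while RandomizedSearchCV fits only n_iter randomly sampled combinations.</p>
<pre><code class="lang-Python">from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

# Small synthetic dataset, just to compare the two search strategies.
X_demo, y_demo = make_classification(n_samples=200, random_state=42)

grid = {'C': [0.1, 1, 10], 'gamma': [1, 0.1, 0.01], 'kernel': ['rbf']}

# GridSearchCV tries every combination: 3 x 3 x 1 = 9 candidates.
full = GridSearchCV(SVC(), grid, cv=3).fit(X_demo, y_demo)

# RandomizedSearchCV samples only n_iter candidates from the same grid.
sampled = RandomizedSearchCV(SVC(), param_distributions=grid, n_iter=4,
                             cv=3, random_state=42).fit(X_demo, y_demo)

print(len(full.cv_results_['params']), len(sampled.cv_results_['params']))
</code></pre>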
<pre><code class="lang-Python">    X = df.drop([<span class="hljs-string">'target'</span>], <span class="hljs-number">1</span>)
    y = df.target
    print(X.shape, y.shape)
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gidcu7d4o5cqgx74kkv8.png" alt="Creating X and y" />
Then we created a dataframe X consisting of the feature variables; target is dropped because it is not a feature variable but the target variable. y is defined as a pandas Series object holding only the target column.</p>
<pre><code class="lang-Python">    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=<span class="hljs-number">0.2</span>)
    clf = svm.SVC()
    clf.fit(X_train, y_train)
    print(<span class="hljs-string">"The accuracy of the classifier is {}"</span>.format(clf.score(X_test, y_test)))
</code></pre>
<p>In this step, we divided X and y into train and test sub-datasets. train_test_split returns four objects, so we stored those values into X_train, X_test, y_train, y_test. The parameters are X, y, and test_size=0.2, which defines what fraction of the dataset is set aside for the test set, in this case X_test and y_test.
Then we instantiated SVC (Support Vector Classifier) into the variable clf and used the fit() method to fit X_train and y_train.
The accuracy of the classifier is found to be a mere 68.85%.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/41c4ktnhxdwt7t3o38wy.png" alt="Initial Accuracy" />
Sadly, an accuracy of 68.85% is very low, so we try to tune certain parameters and improve it. </p>
<pre><code class="lang-Python"><span class="hljs-comment">#Lets try tuning some hyperparameters.</span>
    param_dist = {<span class="hljs-string">'C'</span>: [<span class="hljs-number">0.1</span>, <span class="hljs-number">1</span>, <span class="hljs-number">10</span>, <span class="hljs-number">100</span>, <span class="hljs-number">1000</span>],
    <span class="hljs-string">'gamma'</span>: [<span class="hljs-number">1</span>, <span class="hljs-number">0.1</span>, <span class="hljs-number">0.01</span>, <span class="hljs-number">0.001</span>, <span class="hljs-number">0.0001</span>],
    <span class="hljs-string">'kernel'</span>: [<span class="hljs-string">'rbf'</span>]
    }
    svc_hyper = RandomizedSearchCV(svm.SVC(), param_distributions=param_dist, verbose=<span class="hljs-number">2</span>, cv=<span class="hljs-number">3</span>, random_state=<span class="hljs-number">42</span>, n_iter=<span class="hljs-number">10</span>, scoring=<span class="hljs-string">'accuracy'</span>)
    svc_hyper.fit(X_train, y_train)
</code></pre>
<p> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eiz1cgzae9y95mh9bkrs.png" alt="Fitting for the best parameters" />
 Here, we used different sets of values for the parameters C, gamma, and kernel, looping through combinations of them to find which combination gives the best result.
 Here <strong>C</strong> is the regularization parameter. The strength of the regularization is inversely proportional to C. It must be strictly positive. The penalty is a squared l2 penalty.
 The parameter <strong>gamma</strong> is the kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.
 And the parameter <strong>kernel</strong> specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used.
 Here, we used only 'rbf' because the other kernels take significant time to train. You can try other kernels yourself and see if that changes the results.
 To know more about SVC, go through <a target="_blank" href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html?highlight=svc#sklearn.svm.SVC">this</a>.</p>
<pre><code class="lang-Python">    svc_hyper.best_params_
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fulffosjlv9sgjllm8sh.png" alt="best parameters" />We get the best parameters as <strong>{'kernel': 'rbf', 'gamma': 0.001, 'C': 1000}</strong>. So, let's use them to fit the data.</p>
<pre><code class="lang-Python">    best_svc = SVC(C=<span class="hljs-number">1000</span>, gamma=<span class="hljs-number">0.001</span>, kernel=<span class="hljs-string">'rbf'</span>)
    best_svc.fit(X_train, y_train)
    print(<span class="hljs-string">"The accuracy of the classifier is {}"</span>.format(best_svc.score(X_test, y_test)))
</code></pre>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2aejxf9z88cmbxak19c.png" alt="Final Accuracy" />
After fitting the data with SVC using the best parameters, we got an accuracy of <strong>94.26%</strong>. That's a remarkable improvement over the initial <strong>68.85%</strong>.</p>
<p>Hence in this way, we can use RandomizedSearchCV to tune the parameters and increase the accuracy. </p>
<p>GitHub Repo of the code: https://github.com/bibekebib/Hyperpramater-tuning-article-code</p>
<p>Deepnote Shared code: https://deepnote.com/@bibek-chalise/Hyperparameters-Tuning-Tutorial-j46REW6sTXaWqbz8APnolQ#</p>
<p>If you want to try this with other algorithms, here is a list of parameters that you can hyper-tune. </p>
<pre><code class="lang-Python">    <span class="hljs-comment">#Random Forest </span>
    n_estimator = [int(x) <span class="hljs-keyword">for</span> (x) <span class="hljs-keyword">in</span> np.linspace(<span class="hljs-number">100</span>, <span class="hljs-number">1200</span>, num=<span class="hljs-number">12</span>)]
    max_depth = [int(x) <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> np.linspace(<span class="hljs-number">5</span>, <span class="hljs-number">30</span>, num=<span class="hljs-number">6</span>)]     
    min_samples_split = [<span class="hljs-number">2</span>, <span class="hljs-number">5</span>, <span class="hljs-number">10</span>, <span class="hljs-number">15</span>, <span class="hljs-number">100</span>]
    min_samples_leaf = [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">5</span>, <span class="hljs-number">10</span>]
    criterion = [<span class="hljs-string">'gini'</span>, <span class="hljs-string">'entropy'</span>]
    param_dist = { <span class="hljs-string">"n_estimators"</span> : n_estimator, <span class="hljs-string">"max_depth"</span> : max_depth, <span class="hljs-string">"min_samples_leaf"</span>:min_samples_leaf, <span class="hljs-string">"criterion"</span>:criterion, <span class="hljs-string">"min_samples_split"</span>:min_samples_split }

    <span class="hljs-comment">#KNN</span>
    n_neighbors = [int(x) <span class="hljs-keyword">for</span> x <span class="hljs-keyword">in</span> np.linspace(start = <span class="hljs-number">1</span>, stop = <span class="hljs-number">100</span>, num = <span class="hljs-number">50</span>)]
    weights = [<span class="hljs-string">'uniform'</span>,<span class="hljs-string">'distance'</span>] 
    metric = [<span class="hljs-string">'euclidean'</span>,<span class="hljs-string">'manhattan'</span>,<span class="hljs-string">'chebyshev'</span>,<span class="hljs-string">'seuclidean'</span>,<span class="hljs-string">'minkowski'</span>] 
    random_grid = { <span class="hljs-string">'n_neighbors'</span>: n_neighbors, <span class="hljs-string">'weights'</span>: weights, <span class="hljs-string">'metric'</span>: metric, }

    <span class="hljs-comment">#Logistic Regression</span>
    param_dist = { <span class="hljs-string">'penalty'</span> : [<span class="hljs-string">'l1'</span>, <span class="hljs-string">'l2'</span>], 
    <span class="hljs-string">'C'</span> : [<span class="hljs-number">0</span>, <span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>] 
    }

    <span class="hljs-comment">#Gaussian Naive</span>
    params_NB = {<span class="hljs-string">'var_smoothing'</span>: np.logspace(<span class="hljs-number">0</span>,<span class="hljs-number">-9</span>, num=<span class="hljs-number">100</span>)}
</code></pre>
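<p>Any of the grids above plugs into RandomizedSearchCV exactly the way we did for SVC. As a hedged sketch (again on a synthetic make_classification dataset, purely for illustration), the Gaussian Naive Bayes grid would be used like this:</p>
<pre><code class="lang-Python">import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.naive_bayes import GaussianNB

X_demo, y_demo = make_classification(n_samples=200, random_state=42)
params_NB = {'var_smoothing': np.logspace(0, -9, num=100)}

# Sample 10 var_smoothing values at random and keep the best by CV accuracy.
search = RandomizedSearchCV(GaussianNB(), param_distributions=params_NB,
                            n_iter=10, cv=3, random_state=42,
                            scoring='accuracy').fit(X_demo, y_demo)
print(search.best_params_)
</code></pre>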
<h1 id="happy-learning">Happy Learning!</h1>
]]></content:encoded></item><item><title><![CDATA[Few feet Apart.]]></title><description><![CDATA[“Bibek”, she messaged just before I was about to turn off the WiFi.
Cold outside, you all know what Pokhara serves in the mid-winter. Too cold, one blanket is never enough to comfort us. Already had switched off the light, was about to switch off my ...]]></description><link>https://www.chalisebibek.com.np/few-feet-apart-5b1a470274f5</link><guid isPermaLink="true">https://www.chalisebibek.com.np/few-feet-apart-5b1a470274f5</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Thu, 18 Mar 2021 14:17:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111161011/Xnl7iJFWg.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>“Bibek”, she messaged just before I was about to turn off the WiFi.</p>
<p>Cold outside, you all know what Pokhara serves in the mid-winter. Too cold, one blanket is never enough to comfort us. Already had switched off the light, was about to switch off my eyes too.</p>
<p>“Ohh! Somebody messaged me, Can’t believe my eyes.” I replied with winky emoticons.</p>
<p>“Pagal!”</p>
<p>She Fired some angry emoticons.</p>
<p>I called her, she received the call in a microsecond.</p>
<p>“Hello, angry girl.”</p>
<p>“I am not talking with you.”</p>
<p>“I heard something like this a few hours back.”</p>
<p>She always does this, her mood and her anger swing more than the bouncy tracks of Lords Cricket Stadium.</p>
<p>We had this conversation the same day a few hours back, which ended with a romantic fight.</p>
<p>“I will never call you, never going talk to you, ever and this ever is for true. Never.”</p>
<p>I just said, “Okay my angry flappy bird, see you never.”</p>
<p>“I hate you” she replied.</p>
<p>I was blushing by this side, however, Replied to her, “I hate you 3000, and always remember, I hate you the most. Don’t call me, keep it in your mind, don’t call me.”</p>
<p>“I hate you even more.” she continuously copy-pasted this.</p>
<p>“How was the day?” she looked even more beautiful when the ray of the LED light kisses so softly by her lips, the glow I see in those sparkling eyes, I wish I had words for it. The reasons we are together, her love for me, that endless love which a fight can never end. She is crazy, that craziness drives me mad each day, makes us closer each time, deep inside i miss her, she misses me even more. Different cities, the overloaded pressure on my head, her studies, our respective dreams, the small world we build each day under the canopy of respect and attachment we share, and the miracle of how we see each other life, we are stronger, just perfect regardless the imperfections we own.</p>
<p>“I missed my heart and finished the whole damn chapter. Bibek, I know I will crack the job, I feel it.” I felt the child, I felt the emotions in those words, a dream, a charisma in her eyes. So vulnerable she, yet so fighter, I just paralyze in those symmetrical eye-lined contours of lips and those deep eyes.</p>
<p>“You going to make me, us proud one day for sure.”</p>
<p>“Didn’t you miss me?”, She wanted to feel special, she was special but that’s she, we all want to be missed by someone, loved by someone, that someone that sees us differently, that treats us especially.</p>
<p>“Nah, why would I miss you, Sorry!” I said with a winky smile.</p>
<p>“Goodbye, Goodnight, I don’t wanna talk to you, I just wanna kill you!”</p>
<p>“Pagal! I hate you.”</p>
<p>“I know.”</p>
<p>Knock! knock,</p>
<p>“what’s up, man?”</p>
<p>“Water!”, “talking with vauju??”</p>
<p>“Yeah.”</p>
<p>“Was it Samir?” She asked me after my roommate returned.</p>
<p>“Umm, Came by to drink water.”</p>
<p>“Why did you choose the kitchen to sleep in?” She asked.</p>
<p>“We can only afford two rooms, one kitchen, and a bedroom, let’s just say we fought head to toe and I landed here.”</p>
<p>“Pretty bad luck my man” She laughed after saying this.</p>
<p>“Yep, not gonna argue there. I have you.”</p>
<p>“Ouch, that’s cute. You must do some decent good deeds to get me!”</p>
<p>“And I heard something like Vauju from his mouth, what’s that?”</p>
<p>I said, “Me his bro and you vauju, you know every one of my group calls you vauju. Literally, they never tell your name, always call you Vauju. ‘Apekshya’ equals vauju.”</p>
<p>“Yuck!, Vauju and me, too young to be called by it. I love my Name, call me Apekshya.”</p>
<p>We both laughed.</p>
<p>“Bibek.” She called my name like she is about to say something serious.</p>
<p>“Hajur, Apes,” I call her apes, so ugly in words right, she too chuckles, when I write in the message, but when Invoice, she loves being called by her nickname.</p>
<p>“I fear sometimes of us, you know we have this difference by, I hate this society. “</p>
<p>“Don’t worry, If sasura ba won’t go by our way, we do what every traditional love story couples do, elope away.”</p>
<p>“Be serious Nah!!” She was serious.</p>
<p>“I am serious k, the moment he knew that I love you, he will instantly be like, “hey Simran ji ley Apni Jindagi”. In fact, he won’t doubt to give your sister’s hand too.”</p>
<p>“I swear, I gonna kill you!” She laughed so hard that her elder sister too woke up.</p>
<p>“It’s 12, you still awake, If papa listened, you know we both die.” I was just listening.</p>
<p>“Didi says hi to her dearest, what you call each other, Best Friends. Why you say Best friends to everyone around me. You so ruined it’s meaning.”</p>
<p>I laughed, “she is my best among every others k.”. she smiled.</p>
<p>“I know, what you were talking this afternoon, you both circled me and my habits, I am counting each of your acts. The day, I meet you, I gonna hit you so much.”</p>
<p>“I miss you,” I said.</p>
<p>“I miss you too baby, it’s been three months since I last saw you.”</p>
<p>“I will be there by this New Year, baked something in my mind, you gonna love it.”</p>
<p>“And that is???”</p>
<p>“There is something called SURPRISE in the English dictionary, search its meaning tomorrow,” I said.</p>
<p>I feel the curiousness in her breath.</p>
<p>“I want Kitkats, dairymilk and so much Chocolates, with whatever you have planned. You once said, You gonna buy me chocolates every-time I smile.” And she smiled, increment the total by one.</p>
<p>“I remember everything, But if i could only give you one thing, in between Hug and Chocolates, what would you choose?”</p>
<p>“Chocolates through your lips!” I laughed.</p>
<p>“Okay Boss, Chocolates, lips and something special in your service. What else my Lord?”</p>
<p>“Just you and your love. I really miss you, I miss you so badly.”</p>
<p>“I know, just few months and I will be there.”</p>
<p>There was thundering outside, another reason I hate Pokhara, always raining cats and dogs. I sometimes think Indra Vauman has this terrible problem with Pokhreli People.</p>
<p>And when it thunders, I fear the most, often I sleep alongside Samir, he hates it but I am a terrible person to handle it alone, he bears this. Had to, he is my best pal.</p>
<p>“Is it thundering there too?” she lives in Chitwan, 140KM apart, the weather is almost the same in both the place.</p>
<p>“Umm, pretty badly.”</p>
<p>“You going to the other room to sleep?” she asked with a caring sense. It’s weird how we love others right, how hormones works when the person we care is in problem, so complex structure we develop in coarse of evolution.</p>
<p>“No not today may be? I want to win over my fear this time.”</p>
<p>“Ohh ho, Strong Boy. Be careful, if you feel uncomfortable, please don’t disturb me.”</p>
<p>“And now, you’re so dead.”</p>
<p>“I think we should sleep now, its already 1, you have to go by office tomorrow right?”</p>
<p>“Umm yar, I hate the Job, I wanna do something on my own, my own office, my own ideas, just few people but we build something that makes sense to the purpose of my living.”</p>
<p>“You will do it, I have full faith on you, One day you standing infront of thousands, inspiring them to dream high, motivating to follow the heart, I be in the corner of the room cheering for all your achievements.”</p>
<p>Then suddenly it hit me, this is what I said I would be during those days of colleges. But nothing has changed, same Pokhara, same rainfalls, her on the phone and the same roomie. What am I doing?</p>
<p>“You are my strength, Never be in the corner, always by my side at whatever I achieve.”</p>
<p>“I love you, talk to you tomorrow, If you say something more, I gonna cry.” She was literally on tears, the tears of love, the future we hold together that’s just grows with each sunshine.</p>
<p>“Bye, I love you too.”</p>
<p>Then as usual I went back to existential crisis…</p>
]]></content:encoded></item><item><title><![CDATA[Goodnight, See you!]]></title><description><![CDATA[“So finally, you admit that you fell for me. But remember, we have met just once. I may not be as good as you thinking. I may be cruel, I may be crazy, I may be mad, anything, demon beside this face.”

“So, you believe in love at first sight!”.
“No, ...]]></description><link>https://www.chalisebibek.com.np/goodnight-see-you-a1db729db60e</link><guid isPermaLink="true">https://www.chalisebibek.com.np/goodnight-see-you-a1db729db60e</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Sun, 29 Mar 2020 14:48:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111388501/2J4HJVD3_.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>“So finally, you admit that you fell for me. But remember, we have met just once. I may not be as good as you thinking. I may be cruel, I may be crazy, I may be mad, anything, demon beside this face.”</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111104908/O_331lezW.jpeg" alt /></p>
<p>“So, you believe in love at first sight!”.
“No, not at all”, I replied.
“If that’s the case, we haven’t met afterwards that day right? Then...”
“Yes”.
“But you just said you like me”. She asked me.
“Umm, I really do like you”. I wrote with a blush in my face.</p>
<p>It was nine in the evening, the Chaitra of Pokhara was behaving in different way this year. Humid but more often than the previous years. Still, we unconsciously were made to comfort ourselves under a light blanket. Don’t know the reason, but looks like the global warming impact. Whatsoever, let the world concern more on it, that moment, what I know was that I was on cloud nine.</p>
<p>Two sad emotions forced from the other side, I could feel it.
We were chatting on Viber, I pulled out the gut to ask her number last time we chatted on Facebook. At first, she didn’t give the number but at the end of the chat, she gave to me. Aside of her restrictions on calls and messages on the number, I did it. Few scolding on the initial but what more matters was her texts. I too asked her going out on “JUST FOR COFFEE” kind of date, but she always replied as a normal girl would say. “I know about guys like you!”. However she hasn’t stopped talking to me. She is different than others.</p>
<p>“It’s not that I started falling in love the day I first saw you, but it’s true. I like you. Your picture, and you, have been captured in my mind since that day.”
“My picture! Ohh. Can I know how I look in the picture?”
“Beautiful.”
“Just beautiful?”
“Precious. Trust me, the Mona Lisa is a myth; you are the true art of god.”
“You have so much to learn about praising a girl. I doubt it, but well tried.”
“The princess has every right to doubt, but I promise to clear each doubt, each time.” I thought the message looked so dull and incomplete this time, so I added a few love hearts to it.
We had met a month ago at a street festival. She was a friend of one of my friends; I sent her a request, and it was accepted a few days later. We exchanged a few reactions on posts and stories, and then we started texting each other.</p>
<p>“Since when am I your princess??”
“You are my queen,” I typed, but it looked like such a stereotypical text to send. I deleted it and retyped, “You are the princess of your father and mother, and you were born to rule the hearts of many.”
That wasn’t a great line either, but I don’t know, I strongly felt like sending it the moment I typed it. After sending, before the next reply came, my heart pounded so hard.
“Many?” she asked.
“Prince!! “ I replied.
“Thank you, mister who is trying to be a prince, I guess. But these flirty lines are not going to work on me! I am not like other girls.”
“Thank you!” I replied.
“But why????” I could sense the urgency for an answer in this reply.
“For not being the same as every other girl. You are just perfect the way you are. That’s the reason I fell for you.”
“So finally, you admit that you fell for me. But remember, we have met just once. I may not be as good as you think. I may be cruel, I may be crazy, I may be mad, anything, a demon behind this face.”
“Oh really, I will stop following you then.” I replied to her message.
“Bye! You just proved what you are.”
“Sorry, sorry! Just kidding. Your above-par level of craziness is the pill for my life.”
“I don’t believe those who change colour with time.”
“I was just kidding k, oh! Princess of craziness, please forgive this poor soul. I will do anything to atone for my mistake and am ready to face any punishment!”
“Any punishment?” The reply was very fast.
I had fallen into her trap. No was not an option at all. So I texted, “Any punishment, my lord.”
She laughed at my reply.
“Ok! Goodnight. I am sleeping now!”
“So fast, but why?” I replied within a second.
“Oh! Hello, I have strict habits. I am not sparing my habits for a random guy’s flirting.” That was her reply.
“Random guy, now that hurt. Flirt is still okay.” Now it was my turn to express my moody side to her.
“Say sorry,” she replied, as if it were my mistake. But why should she be angry? I should be the angry one as per the script, right! But no, she is angry. Girls are so unpredictable.</p>
<p>“Me, sorry? Why? You just called me random, and still you want an apology from me.”
“Because you just admitted the fact that you are flirting with me.”
“But you calling me a random guy has me in tears. God is going to punish you for these tears of an innocent person like me.”
“No, no, no! You are such an idiot. Plus dramatic, and negative marking for bad acting.”
“Sorry!” I apologized.
She asked with a sense of curiosity, “Why?”
“I am not flirting; I am saying what I genuinely feel for you.”
“I am also sorry for calling you random; you aren’t a random guy.” She replied with a few heart emoticons.
“I am not random, ohh! Happiness exceeding and my heart feeling something-something.”
“Don’t overthink.”
“Ha ha, okay. Kidding. Sleeping, or changing habits for a random guy?”
“Youuu!! I am sleeping, you too sleep.”
“One thing!” I asked.
“???”
She replied.
I asked whether we could meet at the coffee house tomorrow at 4.
“See you today, if I am in the mood to meet this random guy for a second time.”
It was already past 12.
“Okay, Princess, as you wish. Goodnight. See you.”
“Goodnight, sweet dreams. Take care. Bye”, she texted.</p>
]]></content:encoded></item><item><title><![CDATA[A letter to my first love]]></title><description><![CDATA[Lately, I have started using Instagram. I am sorry, but I stalk your profile each day and it is the part of my life.

 Dear my first love,
 “I love you”, “I really do like you”. Dozens of poems and tens of stories of you and me, we just the character...]]></description><link>https://www.chalisebibek.com.np/a-letter-to-my-first-love-a0586349b068</link><guid isPermaLink="true">https://www.chalisebibek.com.np/a-letter-to-my-first-love-a0586349b068</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Wed, 26 Feb 2020 12:52:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111115549/DnDiUKVzK.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lately, I have started using Instagram. I am sorry, but I stalk your profile each day and it is the part of my life.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111112496/gQv_z6ZsH.png" alt />
 Dear my first love,
 “I love you”, “I really do like you”. Dozens of poems and tens of stories of you and me, we were just the characters of those tales, hundreds of love quotes in your name. I remember each one of them, the ones I used to keep inscribing in black ink in your name. How precious was each word associated with your name, just the way you are precious to me. I hope those words were enough to describe the feelings and emotions I had in my heart for you.
A shy boy with minimal social skills, possibly the most overlooked and never truly seen classmate of yours, nurturing thoughts and love in every part of his heart. Me, trying to cross the boundary and break the walls, the darkest separating lines drawn between you and me. I must be foolish, I must admit it. I never deserved you, and I never will: you, the princess and dream girl of any guy out there, and me, the unsung one with zero self-esteem, zero self-confidence, and barely surfacing talent. You, that super-duper-cool girl with ultimate talents and skills, and me, the last among all the others. Still, the thought of being with you, of spending days far from every virtual thing we own, defying every social belief that drew a boundary between us, creating a world where there is no limit to the love we have for each other. You were the charm of my dull life and the color of my black-and-white living. Yes, you were my everything; you were, and are, my first love.
 I can never explain how much I used to think of you; the two years we spent in the same square room were the best days I could ever have. I went unnoticed all those years, but I used to think of you, to feel you, every moment. Every flicker of a second our eyes met, intentionally or by the flavor of the time, a strong breeze of comfort strung beats of love, and the heart throbbed in the rhythm of fulfillment.
Your eyes, your eyes: the perfectly curved eyebrows, the black finish of the eyeliner, the crimson shyness in your face, the world within your eyes, that slender, threadlike, silky black hair. The art of killing with a smile, aligned with pink glossy lips, is something you must have mastered; you, the professor of making me fall in love with you every time I see you. And the ring-finger-tip-sized red tika on your broad forehead, the bonus craft to your charm.
 Dear my first love, you remember the farewell program, right? The day I, you, we all separated with thousands of memories, hundreds of thoughts, and dozens of sad faces. The truly last time I saw you, the day I literally cried at being far from you, at you being far from my sight. Tears rolled down my eyes like never before; I couldn’t show you what you meant to me, how much I adore you, how much I feel for you. A feeling of being alienated forever from your eyes started killing me from then on. Unfortunately, I went undiscovered by your eyes that day too. I saw you talking with literally everyone; however, me! How did I go unseen? Sometimes these sorts of memories wet my thoughts and dry my mind.
 I remember that day for another reason too: I saw you wearing a saree for the first time. The purple and sky-blue saree; how could someone be as beautiful as you were that day? All I had ever heard of were Cinderella as a princess and the angels of heaven, and I bet you looked gorgeous enough to make them all jealous. I really wish I had said, “Oh! my love. You look so great today.” All day, all I did was trace your presence with my eyes. Forgive me, it was a pervert act of mine, but the delight in your eyes made me so.
Dates and numbers freak me out; I always find them difficult to remember. However, there is one date I don’t have to force my mind to remember: undoubtedly, your date of birth, March 9, 2014. That date is something I will never forget, the precious date of my life. Firstly, your birthday, unforgettable for one reason; and for another, we talked with each other for the first time. Yes, the same moment I relive each day, the same 5-second talk that keeps looping in my mind. I don’t know how I managed to heap up my guts and wish you a happy birthday. Among others’ wishes, mine was filtered out, but I know you replied to it. For me, more than that definite interval, that specific interval matters the most.
And now, years have passed; we moved on at our own pace of life, we changed, and things have changed. Paths diverged to an extent from which merging us is never possible. You may or may not remember me, but I remember you each day. You were, are, and will always be my first love; your memories are not going to vanish within me till my last breath. How can I forget the girl who was my reason to attend school, the one who healed the pain of loneliness; how can I forget the girl whose eyes, whenever they met mine, though implicitly, always made my day.
You are the breeze of the spring, the color of the fall, the art of the summer, and the beauty of the winter. In fact, you are the shadow of my living and the warmth of my chilly nights. Every drop of rain falling from the cotton-sized clouds reminds me of you. The sunlight each day makes me hopeful of meeting you someday, and the nights give me dreams of you. I live you in every presence; in every second, I see you.
It’s hard to accept the fact that you never thought of me that way, that you dream of someone else the way I dream of you, that you aspire to walk holding someone else’s hand down the walk of life. It penetrates my heart; sometimes I cry, and sadness engulfs me. All I see is you and your image, and thinking of you being with someone else burns my heart and soul. I almost die at every thought of you. How painful is it to let your beloved elope away? I know my love is one-sided and I may never be able to confess it to you, but still, I can’t accept the fact of you becoming someone else’s queen.
 Lately, I have started using Instagram and started to follow you. Sorry, but stalking your profile has become part of my daily routine. Your TikTok videos and those funny posed pictures, I watch them all, not just once but repeatedly. I have told my roommate about you; you know something, when I was telling him about you, I almost cried. He constantly asks me to forget you and live a life above your memories, but I know my inner me will never do that, though I nod my head in front of him. I see you mentioning him sometimes in your posts, uploading those couple-goals memes, and sometimes those emotional sad quotes. You are still unpredictable.</p>
<p>It's 3 in the morning,
A few hours before a bird starts to chirp
A song of good morning,
And I am here stalking your profile
Like another crazy stalker.
Maybe I am, but why am I?
The reason you will never know.</p>
<p><em>“If I fell off tomorrow would you still love me?</em>
<em>If I didn’t smell so good would you still hug me?</em>
<em>If I locked up and sentenced to a quarter-century,</em>
<em>Could I count on you to be there to support me mentally?</em></p>
]]></content:encoded></item><item><title><![CDATA[A Journey Part (i)]]></title><description><![CDATA[Spoiler alert: “So Much So……”
On Feb 12, 2020, I feel like I did something that I hadn’t done before. I gave a presentation to around 50 people with full confidence, no shivering legs, no broken mind, no sweety palm, no fumbling of words and no dry l...]]></description><link>https://www.chalisebibek.com.np/a-journey-part-i-c2833fa379b4</link><guid isPermaLink="true">https://www.chalisebibek.com.np/a-journey-part-i-c2833fa379b4</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Thu, 13 Feb 2020 03:04:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111123277/kcq_lGZ6x1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Spoiler alert: “So Much So……”</p>
<p>On Feb 12, 2020, I feel like I did something I had never done before. I gave a presentation to around 50 people with full confidence: no shivering legs, no broken mind, no sweaty palms, no fumbling of words, and no dry lips (sorry, I am joking about that last one). From the first slide, I felt my lips getting as dry as if I were in the center of the Sahara Desert (shit! bad phrase).</p>
<p>I had prepared the slides a week earlier; I pulled and pushed ideas and slides from the community I was engaged in and then merged them all. I had this idea running in my mind: I would deliver the best presentation anyone in the room had ever seen. But 29 freaking slides. <strong>What?</strong> Yeah, a freaking number of slides; guess how bored everyone in the room would be watching me try to explain 29 slides with every point on them. The first rule of presentations, which I learned at the end of the day: <strong>“Keep slides as few as you can”</strong>. Next time, things will turn around.</p>
<p>The second law of presentations that I violated: <strong>“Keep memes in-between”. Fuck hell!</strong> I didn’t do it. How in the world could I miss it? Next time, a to-do item when having a presentation: <strong>memes mandatory.</strong></p>
<p>Enough jokes for the day. Let’s dive into the serious talk. My journey of being a Speaker.</p>
<p>Last Dashain, I applied for Microsoft Student Partner, something I had wanted to be for a long time. I also tried for GitHub Campus Expert but couldn’t make it for various reasons. So there was no sure chance of me getting the opportunity to be an MSP. But last January, I got the confirmation mail from Microsoft, and I became an MSP, part of the biggest student partnership program in the world.</p>
<p>Then followed the mandatory post on Facebook: <strong>“Started job at Microsoft”</strong>. I sometimes wonder how such stimulation works on our minds (a topic for the next blog, let’s assume). Congratulations received from friends and family. A proud moment anyone can have as a student: a partnership with Microsoft. Then the learning path, promotion to the second stage of the program, and trying hard to reach the third level.</p>
<p>Now the big day. To give a presentation for everyone who wishes to be MSP.</p>
<p>(That will follow on the next blog Part II of this series.)</p>
]]></content:encoded></item><item><title><![CDATA[Day of a 24’]]></title><description><![CDATA[source: internet
“Ok! That’s fine, just one more warning to solve, looks like I finally finished this freaking project.”
Half-past eight and I anyhow had to show my half-asleep, half toxic and half-drunken face to my boss in an hour.
And please don’t...]]></description><link>https://www.chalisebibek.com.np/day-of-a-24-235db03c0b5f</link><guid isPermaLink="true">https://www.chalisebibek.com.np/day-of-a-24-235db03c0b5f</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Sat, 28 Dec 2019 11:05:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111088043/rL2glQAIc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>source: internet</p>
<p>“Ok! That’s fine, just one more warning to solve, looks like I finally finished this freaking project.”</p>
<p>Half past eight, and I somehow had to show my half-asleep, half-toxic, half-drunken face to my boss in an hour.</p>
<p>And please don’t ask me about the speed of the “Pokhara Nagar Bus”. When a normal user gets 40 Mbps of internet, the local bus makes us travel at 256 Kbps. Being a computer guy, speed matters to me the most. Every day I feel like I need to buy a vehicle of my own.</p>
<p>Me, always in a dilemma: either seek another job in the capital or stay here, confining myself to these limited opportunities. The only thing that keeps me working here is the payment, a satisfactory one that I cannot get elsewhere for what I am. But what if I could? No comparisons have been made yet; I am too busy with where I am going.</p>
<p>“Holy cow! Only 30 more minutes; I wasted half an hour trying to look like I am ready for work, which I actually never am. Just the same working function on loop, no escape back-route function declared or called.”</p>
<p>“Bibek! I need to talk to you. The client is not happy with your last design.”</p>
<p>“But…………..”</p>
<p>No use in saying it. That was their own chosen design; the clients themselves don’t know what they want.</p>
<p>“What I need to do now, then?”</p>
<p>“Do what they might like, and no more complaints after this. I want it final within a couple of days; we are running out of dates.”</p>
<p>“And the project I am currently working on?”</p>
<p>“You have never been to the office on time and these extra excuses. I am not paying you as a senior developer for such things.”</p>
<p>“Oh! Yeah, I almost forgot that. Keep your senior tag to yourself.” I wish I had said it right to his face. When you work surrounded by interns who are working without payment, there is no pride in saying I am a senior here.</p>
<p>How can someone think of running a company full of interns?</p>
<p>“Whatsoever, I will never do these kinds of unethical things whenever I have my own start-up. Just a year more, then I am gonna spit in his face and move on.”</p>
<p>I am never gonna slave for someone for the sake of a few extra grand.</p>
<p>“A coffee and a cigarette.”</p>
<p>“And khana (food), shall I make it ready by now?”</p>
<p>“I will be here in an hour to eat.”</p>
<p>An infirm lifestyle in an apartment, a dimension of 4 by 4: a bed, a table, a junk of wires, a chair full of clothes to be washed, and a stove which I rarely get time to light up. Add to it an ashtray and bottles of hard liquor, my dessert for the day.</p>
<p>“Hey Saurav, can you do the validation check to this?”</p>
<p>“I don’t know yet brother, I am learning, can you teach me how to do that?”</p>
<p>“Heck no! I was expecting the same, so I started doing it myself.”</p>
<p>Saurav, one of the interns who joined the office a week ago. His CV was full of expertise in half a dozen programming languages. No hard complaints about his CV; that’s what everyone does. “The classical Hello World output == knowledge of a programming language”. He is doing what everyone tries to show off in the early days.</p>
<p>Inside the cubicle, the memes on Instagram and Facebook look so relatable. The errors and the debugging jokes, the StackOverflow and GitHub realities, and the Reddit inspirations hit me real hard.</p>
<p>One thing I really care about when working on any project is privacy and security. I am not actually a security researcher or a pentester, but I need to learn it over time.</p>
<p>My frustration over the work and my work ethics don’t let me release it to normal users. The unhealthy working mechanism of the IT field in Nepal is itself a bug; it needs to be debugged first for better productivity.</p>
<p>“Ok! That’s fine, just one more warning to solve, looks like I finally finished this freaking project.”</p>
<p>Let me compile and wrap up my working directory.</p>
<blockquote>
<p>bibek@BIbek:~$ cd Documents/projects/deadline-December
bibek@BIbek:~/Documents/projects/deadline-December$ g++ main.cpp -o main
bibek@BIbek:~/Documents/projects/deadline-December$ ./main
Segmentation fault (core dumped)</p>
</blockquote>
<p>Again, the same shit: the pointer issue. I absolutely hate it, one of the issues I wish never to encounter. I don’t know how many hours I am going to waste debugging this one. I am already past my deadlines.</p>
<p>“Hey! Just returned home? How was the day?”</p>
<p>When hours of continuous work are followed by another hour of bus mess, with body and mind both in a dead state and in no condition to deal with further stress, these small acts keep the heart kinda ‘alive’.</p>
<p>She was the only one who knew me and the hell I was going through. For my family, I can’t take the risk of quitting the job; they are proud to call their son a software engineer with a salary of four zeros.</p>
<p>We talk till midnight like there’s nothing more to think of. All my frustrations evaporate like steam, a cloudy day gets its sunlight, and every bit of sunlight strikes me with elegance; the sight of the moon makes me feel the world is joyful, like autumn is never coming and spring never dying.</p>
<p>The only part of my day when the semicolon can be neglected in the chat lines, the only one who has no date issues, the perfect global variable: she is my number[0] with no increment function set up.</p>
<p>When I myself am an alien to my own world of black terminals and indentation errors, where nightmares and ghosts arise in the form of bugs, all I’ve got is her: the cup of my coffee, a compiler who fixes even the run-time errors and deploys all the awesomeness over my frustrations.</p>
<p>Oxytocin, testosterone, caffeine, nicotine, and ethanol working together in my body, a perfect combo to relax my body and soul. I have been addicted to alcohol and nicotine, which I never wanted to be, but the peer pressure and payloads made me so.</p>
<p>“Bibek! You are late again; and you need to switch back to the old design you worked on last week. The client has started liking that design again, and the color is also beautiful.”</p>
<blockquote>
<p>while (!dead)
{
    work();
}</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[When the light’s bit shady]]></title><description><![CDATA[When the light’s bit shady
The thirst in you dwells my land
My land is a slum of unprotected souls
Where your traces, protected with rubber.
//
I mastered to lure your hunger
You mastered to bargain mine
Irony, it gets a different name
You head up an...]]></description><link>https://www.chalisebibek.com.np/when-the-lights-bit-shady-f6c62fdf801e</link><guid isPermaLink="true">https://www.chalisebibek.com.np/when-the-lights-bit-shady-f6c62fdf801e</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Thu, 14 Nov 2019 07:13:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111138619/yhwmODBiL.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When the light’s bit shady</p>
<p>The thirst in you dwells my land</p>
<p>My land is a slum of unprotected souls</p>
<p>Where your traces, protected with rubber.</p>
<p>//</p>
<p>I mastered to lure your hunger</p>
<p>You mastered to bargain mine</p>
<p>Irony, it gets a different name</p>
<p>You head up and I lost in day life-line.</p>
<p>//</p>
<p>Small me, hoped for an arm to rest</p>
<p>Every arm I now rest, sprinkle my chest</p>
<p>Rotten I ate were enough to live,</p>
<p>Fresh I now eat tears me down to hell.</p>
<p>//</p>
<p>Sometimes, a hope abides on a full moon</p>
<p>Light full of satisfaction asides</p>
<p>Canopies build stoned with affection</p>
<p>Which darkness pours the art, I reside.</p>
<p>//</p>
<p>The world rounds, the stars twinkle</p>
<p>So will the glooming moon shines,</p>
<p>Me sleeping with a smile underneath it</p>
<p>A craft that no one can take away.</p>
<p>//</p>
<p>Note: This poem is written with a prostitute as the main character.</p>
]]></content:encoded></item><item><title><![CDATA[Why the Government of Nepal should invest more in IoT?]]></title><description><![CDATA[Source: Internet
Overview:
Digitalization of the world has already almost reached its zenith and with evolving technologies like Artificial Intelligence, Machine Learning, Blockchain, Automated Technology, Augmented Reality, Digital Twin, 5G Network,...]]></description><link>https://www.chalisebibek.com.np/why-the-government-of-nepal-should-invest-more-in-iot-d6ccf3970011</link><guid isPermaLink="true">https://www.chalisebibek.com.np/why-the-government-of-nepal-should-invest-more-in-iot-d6ccf3970011</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Tue, 22 Oct 2019 06:00:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111131852/kBdEo1oQL.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Source: Internet</p>
<h2 id="overview">Overview:</h2>
<p>Digitalization of the world has almost reached its zenith, and with evolving technologies like Artificial Intelligence, Machine Learning, Blockchain, Automation, Augmented Reality, Digital Twins, 5G Networks, and IoT, human civilization is at its technological peak. If only we put our minds into something extraordinary, then boom: technology comes into play, ideas are executed, and things happen.</p>
<p>Even though such technologies have a worldwide impact, Nepal still lags far behind in using them for our betterment. We have always relied on traditional technological tools that haven’t been productive for us. Nepal hasn’t yet invested in a prime technology, which has left us unable to utilize our own potential. So it is already time for us to make some bold moves and cultivate an environment for cutting-edge technology. The Internet of Things could be the ideal one with which to boost our way ahead.</p>
<h2 id="internet-of-things">Internet of Things:</h2>
<p>Wikipedia defines the <strong>Internet of things</strong> (<strong>IoT</strong>) as the extension of internet connectivity into physical devices and everyday objects. Embedded with electronics and sensors, IoT devices can interact with the internet and be controlled and monitored remotely. On a larger scale, IoT covers related fields like AI, embedded systems, and automation, and upgrades traditional technologies into advanced forms. The technology came into the spotlight a decade ago and is one of the technological trends of today.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111129639/afuC7zq3n.jpeg" alt="Source: Internet" /><em>Source: Internet</em></p>
<h2 id="internet-of-things-and-nepal">Internet of Things and Nepal:</h2>
<p>As mentioned above, IoT could be the ideal technological investment for a country like Nepal. Progressively, things are changing; the world is benefiting from IoT, so it is high time to take action. Some of the ways the Government of Nepal can benefit from IoT are mentioned below:</p>
<ul>
<li><p>One of our aims is to create smart homes, smart cities, a smart nation, smart cards, and ultimately smart citizens. For that very purpose, we must do some out-of-the-box thinking and execution. Cities must be equipped with IoT-controlled traffic lights, smart parking areas, WiFi access areas, and properly planned transportation facilities. Real-time data manipulation and analysis will be the heart of such smart hubs. Without IoT-powered technology, these concepts will remain only words, nothing more.</p>
</li>
<li><p>As is well known, every year Nepal faces huge losses due to calamities like floods, landslides, and many more; the shattering earthquake is still on our minds. Other countries have proven able to predict such events, meaning it is known in advance whether a calamity will strike, but we haven’t been able to do so. It all comes down to the use of mind and tech. If we install IoT-based sensors in dams and on riversides, sensors that can analyze water level and flow speed in real time, sensors that can measure wind speed and monitor rainfall over a certain range, then together with GIS and satellite communication they could be handy in predicting such natural catastrophes. The resulting awareness of an upcoming disaster could save hundreds of lives and millions in wealth. Not only this, real-time communication is what IoT is potent at.</p>
</li>
<li><p>Nepal is an agrarian country, and agriculture is the soul of our country’s income. But we face the same problem in this sector too: we are using old, traditional tools and mechanisms in agriculture. We can integrate IoT here as well. With the Internet of Things, we can obtain real-time data on conditions like humidity, air, soil, and the state of crops. The health of animals can be monitored and improved with the use of technology. Similarly, with the smart irrigation concept, the dependency on the monsoon can end, as the technology can self-measure the requirements of crops over time, resulting in higher productivity with lower operational costs.</p>
</li>
<li><p>The travel and tourism industry can have high prospects if IoT-based technologies are used properly. IoT can help tourists personalize their needs, which ultimately makes for a greater customer experience. Not only that, they can have seamless transportation controlled from their own mobile devices when connectivity is maximized. They can travel with ease with proper location information, enjoy tourism-related sports, and even get rescued in case of need.</p>
</li>
<li><p>In addition to the above, IoT will create an environment for new investment in the country that will unlock our potential to interact with other countries of the world.</p>
</li>
</ul>
<p><strong>Conclusion:</strong></p>
<p>The data collected in real time will be beneficial and will serve as datasets for future reference. When fed into ML and AI-based algorithms, such data will yield significant results, which could be helpful in planning development projects. IoT is a technology of the future, and the Government of Nepal must invest in it for the sake of competing with other countries. Far in the future, wars and competitions will be backed by cyber-weapons, and IoT is one such field for a developing country like Nepal; it can serve us on a high note if only we invest some time and manpower in it.</p>
]]></content:encoded></item><item><title><![CDATA[November Rain]]></title><description><![CDATA[A blue kurta girl soaked in rain, trying not to further. Was she playing hide and seek with the rain? Aah, nostalgic.

How someone could feel so much of close just in fickle of seconds” She did that.

Source: Internet
Slight drizzle followed by the s...]]></description><link>https://www.chalisebibek.com.np/november-rain-a98a93ca749d</link><guid isPermaLink="true">https://www.chalisebibek.com.np/november-rain-a98a93ca749d</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Tue, 10 Sep 2019 03:22:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111147965/pXGOvxk8X.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A blue kurta girl soaked in rain, trying not to further. Was she playing hide and seek with the rain? Aah, nostalgic.</p>
<blockquote>
<p>How someone could feel so much of close just in fickle of seconds” She did that.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111145940/jBa9vGCiJ.jpeg" alt="Source: Internet" /><em>Source: Internet</em></p>
<p>A slight drizzle followed by sudden cats and dogs, making way for a downpour. Unusual, in a way; November doesn’t call for a cloudburst. But this time it did, with a sense of humor in the air.</p>
<p>Pokhara, notorious for untimely rainfall, added one more incident to its hall of fame. I don’t call it an honor; I’d rather call it a disgust. Basically, I hate rain: it disturbs the isolation I prefer the most, and I believe in punctuality. Rainfall sucks the hell out of the word “punctuality”. I am not one of those stereotypes who say it’s natural. But nature needs to be punctual, right? Never mind, I don’t need your answer. All I need is the coffee I ordered a few minutes ago.</p>
<p>I am possibly in the best coffeehouse in Pokhara. If I am addicted to anything, it has always been caffeine and fighting blank paper to ink thoughts of love and lust. I remember someone once asked me why I am so dedicated to these niches. I replied, “Everyone fantasizes about what they haven’t got yet.” Enough inner-me talk. Let’s return to the moment. Finally, here comes my coffee mocha, all thanks to the punctual waitress.</p>
<p>Grabbing my umbrella and my AirPods, I started walking home. A girl in a blue kurta, soaked in the rain, trying not to get any wetter. Was she playing hide and seek with the rain? Aah, nostalgic. My legs were paralyzed at the sight of her: an oval face with a celestial nose between upturned eyes, perfectly set above heart-shaped lips. On top of that, her faux-waterfall hairstyle, a few strands of which kept falling past her right eye even after she gathered them back with the rest. Really, an adorable face to remember under the rainfall, with only minimal light lighting up the place. Indeed, I saw a charming face to craft a poem on.</p>
<p>“Oh! I am………”</p>
<p>“<em>Hello? Are you listening?”</em></p>
<p>/<em>A deep silence</em>/</p>
<p>A wave, in fact a strong wave, to make sure I was still there. She lent me a hand to shake; I did as she said.</p>
<p><em>“And yours?”</em></p>
<p><em>“?”</em></p>
<p><em>“Name?”</em></p>
<p><em>“? Name”</em></p>
<p><em>“Is there a name or something that you are called by?”</em></p>
<p><em>“Oh! Sorry, Bibek.”</em></p>
<p>Had she told me her name? Shit, shit, I missed it.</p>
<p>Even in the rain, the heat is melting me down. My heart’s racing and feet shivering. This has never happened before.</p>
<p>/<em>Silence for a minute</em>/</p>
<p><em>“Bibek, Bibek”</em>, a sudden, striking pat on my shoulder. Back to reality. I had lost that moment in the past where she told me her name. If only I could travel to that alternate dimension and relive it; nevertheless, that is just another way to sympathize with reality.</p>
<p><em>“You okay with me sharing your umbrella? Looks like you are dead lost.”</em> I wasn’t, I confirmed.</p>
<p><em>“So you too love rain.”</em></p>
<p>I remained silent.</p>
<p>“<em>I understood, you don’t want to talk to a stranger like me”</em></p>
<p>“<em>No, it’s not like that.”</em></p>
<p><em>“Just kidding,”</em> she said.</p>
<p>Yes, she isn’t a stranger; she is, at present, the most familiar girl of my life. I myself am the stranger. It feels like I have known her for centuries. “How could someone feel so close in just a flicker of seconds?” She did that. I am inflated by her. She made me lose myself within the moment.</p>
<p><em>“So you love rain as I do.”</em></p>
<p><em>“No!”</em> /<em>I shouldn’t have gone with a single word.</em>/</p>
<p><em>“Why?”</em></p>
<p><em>“Just don’t like.”</em></p>
<p><em>“Oh! Boy.”</em></p>
<p>She grabbed my hand, took the shaft of my umbrella, and took it away from me. All of a sudden, those freaking cubic centimeters of water drops engulfed me. The rain made its way to the ground through my body. I was wet; the same unpunctual rainfall poured its existence upon me.</p>
<p>But more than this, I didn’t feel bad about it, because she was there enjoying it. The rain was happy to have finally found its counterpart in hide and seek. Now, will the rain hide and she act as the seeker, or is it just a one-way game? She always hides; the rain just seeks her, to furnish its power within her. Whatever this game is, she is enjoying it.</p>
<p><em>“Do you know, I have this strange allergy to rainfall.</em></p>
<p><em>My body can’t endure rainwater for long; sulfate ions are dreadful for my skin. Yet I like to soak in the rain, just because my soul loves to explore the world within each raindrop. It’s funny how my body and soul diverge. I listen to my heart. Such a weirdo, me, right?”</em></p>
<p>/<em>She continued.</em>/</p>
<p><em>“I love the rain for various reasons. It molds us with life-learning lessons. It urges us to be pure, to adapt to the changes around us, to let go and live life to the fullest. In fact, it makes us realize that whatever we think we are, we will all end up beneath the ground. Why suffer and show a stingy nature when the end is sure?”</em></p>
<p>“Did you enjoy getting wet in the mizzle?”</p>
<p>I nodded.</p>
<p><em>“Oh! Silencer boy, doesn’t your heart say to talk? You are something different.”</em></p>
<p>I didn’t realize I had so little time. If only I had known, maybe I’d have spoken a few words and made conversation with her. But soon she said, <em>“Here’s my place; now I have to go. Thanks for the moment, Bibek. You are surely a person to remember. See you. Bye.”</em></p>
<p>After she was gone, some part of me hated myself. I should’ve talked; she seemed as if I had known her forever. Thoughts started popping into my mind. <em>“Are we going to meet again? When, how, why?” “What you seek is seeking you,” </em>but I’m not even sure if I do seek her. I’ve always somehow hated rain, so is this nature’s way of telling me I am guilty?</p>
<p>I couldn’t help but think: this is me. I always ruin good things. Hell, how can I not like these rainfalls? I always pretend to be aloof, a survivor of a dysfunctional society, and maybe I am. Although, even if just for a moment, I feel like I lost myself there.</p>
<p>“Above the traces of crying clouds</p>
<p>Two souls hooding the same canopy</p>
<p>Piercing boundaries of strangers</p>
<p>Exploding themselves in each drop of rain………”</p>
]]></content:encoded></item><item><title><![CDATA[Cold war for 5G!]]></title><description><![CDATA[“The war for 5G would have been much better if it were between telecommunication companies, but it’s been more than that; it’s a war of political dominance.”

A century after the First World War, a new war for supremacy has started between two countries. Ch...]]></description><link>https://www.chalisebibek.com.np/cold-war-for-5g-40fb5d8c8f6</link><guid isPermaLink="true">https://www.chalisebibek.com.np/cold-war-for-5g-40fb5d8c8f6</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Mon, 10 Jun 2019 02:51:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111098132/FJNoZQCti.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>“The war for 5G would have been much better if it were between telecommunication companies, but it’s been more than that; it’s a war of political dominance.”</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1622111095489/th1uVCSlf.jpeg" alt /></p>
<p>A century after the First World War, a new war for supremacy has started between two countries. China and America are in a cold war to gain financial as well as technological dominance around the world. And the next big weapon in that war is ‘5G’ technology.</p>
<p>According to 3GPP (the 3rd Generation Partnership Project), which maintains standard protocols for mobile technology and works on the maintenance and development of GSM and related standards (2G, 2.5G, etc.), 5G is defined as any system using 5G NR (5G New Radio). The speed of 5G will be up to 100 times that of 4G, which will lower lag time by 5 times, increase mobile data capacity by 1000 times, and enable new capabilities that unlock innovation. In simple language, it’s less about download speed and more of a game changer.</p>
<p>The first fairly substantial deployments came in April 2019. SK Telecom, KT Corporation, and LG U+, telecommunications operators based in South Korea, were among the first to install base stations. Verizon also claimed to provide service in two major US cities. But that was initial testing; the final deployment hasn’t been made yet. China and America are on track to be the first to provide 5G service across their respective nations, and whoever wins the race is likely to dominate both the technological and financial markets.</p>
<p>With 4G technology alone, America increased its GDP by $100 billion in 2016, which created more jobs, lowered the cost of consumer goods, and helped the US earn roughly $125 billion in revenue. And 5G will have an even bigger impact on this sector around the globe. It is estimated that 5G promises a benefit of around $12 trillion, of which America alone expects to capture $3.5 trillion. So being first will eventually change the economic landscape.</p>
<p>The technological advancement from 4G to 5G is not just a matter of a number; its impact goes way beyond that. Many technological projects around the world whose effective functioning is not possible within the bandwidth and speed of 4G can finally move out of the prototype stage. Virtual reality will finally become a reality, wireless technology will substantially replace cable technology, AI-based technologies will be empowered effectively, the gaming experience will be even better, health care will improve with better tracking of patients’ health conditions and much more, the Internet of Things will be used with ease, and concepts like “smart city, smart home, smart citizens and smart living” will be propelled forward. More or less, the world will look like the movies set in the years to come.</p>
<p>China has invested $17.7 billion in capital so far and has added more than 350,000 wireless facilities since 2015, giving the country about 1.9 million wireless sites. Not only this, China has planned roughly $500 billion in 5G-related investment over the coming five years. Similarly, America is set to invest $100 billion by 2020.</p>
<blockquote>
<p><strong>For months, Australia had warned the United States about the destructive capacity of 5G technology. Now, America is aggressively campaigning against Chinese telecom champion Huawei, fearful that Beijing’s domination of 5G could be used for espionage and sabotage.</strong> Huawei, a giant technology corporation based in China, has been banned by the US following allegations of theft of intellectual property, obstruction of justice, and fraud related to its alleged evasion of US sanctions against Iran. That will surely slow the race for China, at least for a while. When the US banned Huawei, the Chinese rejected Apple products, which divided the giants symmetrically. The gap in mutual understanding has made the race even more interesting for third nations.</p>
</blockquote>
<p><strong>“In this 5G war, we can tell that no matter for a country or a company, everyone is striving to develop 5G capacities internally while keeping an eye on the external competition.” A statement written on the official Huawei consumer page.</strong></p>
<p>A report by CTIA found that the US leads with 92 commercial 5G deployments planned for 2019, but noted that China’s wireless operators are conducting “hundreds of large-scale 5G trials across the country.” China, however, has an infrastructural advantage over the US, with more than 14 cell sites per 10,000 people compared to 4.7 in the US. According to the survey, South Korea ranked third in 5G readiness, followed by Japan, Britain, and Italy. But there is a saying in English, “pioneers take the arrows, settlers take the land”: whoever is leading now, the one who settles in for the long run will eventually win the race.</p>
]]></content:encoded></item><item><title><![CDATA[What I demand]]></title><description><![CDATA[What I demand?
My demand is to control suicide; my demand is to let the youth be young.
I am young, full of enthusiasm, full of passion, full of awareness, little awkward, less religious, influenced by western perception, trapped by thoughts, ...]]></description><link>https://www.chalisebibek.com.np/what-i-demand-c2b183fd37b3</link><guid isPermaLink="true">https://www.chalisebibek.com.np/what-i-demand-c2b183fd37b3</guid><dc:creator><![CDATA[Bibek Chalise]]></dc:creator><pubDate>Sat, 28 Oct 2017 17:12:23 GMT</pubDate><content:encoded><![CDATA[<p>What I demand?</p>
<p><em>My demand is to control suicide; my demand is to let the youth be young.</em></p>
<p>I am young, full of enthusiasm, full of passion, full of awareness, a little awkward, less religious, influenced by Western perceptions, trapped by thoughts, enhanced by technology, careless of others’ opinions. But ultimately I am the young generation blooming in this garden of technology. I hold thoughts that differ from my parents’, and I am sometimes addicted to modernization. But I carry an existence. I have a body composed of millions of cells, run by dozens of organs, controlled by a master brain, and my blood is as red as yours. So I am also a human, and a little animal, just as you are. I see things, feel things, understand things, though I don’t react much. But at least I hold sense. So please accept me as a part of this society, this surrounding.</p>
<p>I am not a terrorist; I will not hijack any Boeing to address my demands. I believe in peace and mutual understanding. But that doesn’t mean you can ignore my innocence. I can be violent and fire up this entire world with my ideas, thoughts, and anger. Still, I hope that day never comes.</p>
<p>I believe in some ideologies. But that doesn’t mean I am a blind follower of communism, socialism, or capitalism; I believe in the existence of all these ideologies within me. What I demand is to be kept in a mainstream leading position, not in the protests; I am not your party cadre. What I am is a young entrepreneur seeking business terminals, a medical student seeking a less politically influenced medical seat, a village boy seeking opportunities to serve my motherland, a young woman seeking safe night travel, a young so-called low-caste lad seeking equality in reality rather than only in the constitution, a young citizen seeking a stable country. What I demand is the proper value of my vote.</p>
<p>I demand attention. But that doesn’t define my thinking process; it doesn’t prove that I am an attention seeker. It’s true that I don’t like to sit on the first bench, I don’t understand the laws of physics, literature is difficult to understand, and history feels hard to remember. But I request that you not underestimate me for not being a top student. I am as bright as those top students; I just need a little bit of sunlight. Teachers, make me shine like a diamond: you be the sun, and I will receive every bit of that positive energy. But don’t burn me down over my terminal grades; one single piece of paper is really not enough to rank my future. What I demand is proper schooling, not scolding; what I want is proper teaching, not teasing; I just demand positive inspiration. I wish I could become another great person and eradicate the feeling that a degree doesn’t teach you to live life. I have faith in you, teachers.</p>
<p>I know life is a choice but death is a decision. Yet I am still compelled to hang myself, to commit suicide. I enjoy enjoying, I love loving, and I live living. I know the greenery around me, I know happiness, I inspire myself to face obstacles, I comfort myself to challenge my comfort zone, I am not weak in thought or action, and I know I possess strong guts. But every time I see myself taking excessive sleeping pills or burning myself down, I don’t just leave this physical world; I raise a big question to this civilization: “Who was responsible for my self-destruction? Who else put the rope around my neck? Who else participated in planning my death?” I believe life is harsh. But I wasn’t foolish enough to kill myself; I did it because you compelled me. Suicide was the lone option left for me; there was no other. Didn’t you pave this path for me? I dare you to answer this, and this time I demand your loyalty.</p>
<p>Lastly, I plead with you not to let another young blood be so harsh on himself. This is my final demand, and more of a request. Dear society, political leaders, teachers, and parents: don’t let me fall to the ground. Please stand around me.</p>
]]></content:encoded></item></channel></rss>