Fighting Spam: email fragments from the internet

How to stop people from using my domain to send spam?

Since it hasn’t been explicitly stated yet, I’ll state it.

No one’s using your domain to send spam.

They’re using spoofed sender data to generate an email that looks like it’s from your domain. It’s about as easy as putting a fake return address on a piece of postal mail, so no, there’s really no way to stop it. SPF (as suggested) can make it easier for other mail servers to identify email that actually comes from your domain and email that doesn’t, but just like you can’t stop me from putting your postal address as the return address on all the death threats I mail, you can’t stop someone from putting your domain as the reply-to address on their spam.

SMTP just wasn’t designed to be secure, and it isn’t.


+1 for the postal mail analogy. That’s the one I always use with non-technical people. Nobody has to break into your house to send an outgoing piece of mail with your return address on it. They just need to be able to drop it into a mailbox.


To be more correct, you should say: no one’s using your (domain’s) server to send spam. They do use the domain, namely as the FROM address. Of course SPF is no barrier at all, because the sender will use a relay server which does not do SPF checks. The solution would be simple: the server responsible for the TO address should reject with a 450 to the server the mail originates from, rather than sending a DSN to the server responsible for the FROM address.


Sender Policy Framework (SPF) can help. It is an email validation system designed to prevent email spam by verifying sender IP addresses. SPF allows administrators to specify which hosts are allowed to send mail from a given domain by creating a specific SPF record (or TXT record) in the Domain Name System (DNS). Mail exchangers use the DNS to check that mail from a given domain is being sent by a host sanctioned by that domain’s administrators.
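For illustration only (the domain and addresses below are placeholders), an SPF policy is published as a TXT record in DNS. A minimal sketch:

```
example.com.  3600  IN  TXT  "v=spf1 ip4:192.0.2.10 include:_spf.example.net -all"
```

Here ip4: authorizes a specific sending address, include: pulls in another domain’s policy, and -all asks receivers to fail mail from any other source. As noted above, this only helps with receivers that actually check SPF.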


It doesn’t look like SPF would have helped in this particular example. A machine that bothers to check SPF records to reject mail is unlikely to be so broken as to accept mail for a nonexistent domain, then decide it can’t deliver it and generate a bounce message. If the accepting machine had bounced it properly, your bounce message would be coming from a Google SMTP server.

Sadly, this kind of broken behavior (accepting and later generating a bounce) is not that uncommon. There is not a thing you can do to keep some random machine from pretending it has a message to deliver from your domain. There is also not a thing you can do about delayed bounces.

You can, however, receive fewer of these blowback bounces. If 7E949BA is not a real user, as I suspect it may not be, you’re getting the bounce message because you have your catch-all address enabled. A catch-all means that your domain will accept email for any non-existent user and deliver it to you. This is primarily a good way to grow your collection of spam and bounce messages. In Google Apps, to configure your catch-all, go to “Manage this domain” -> Settings -> Email, about halfway down.


An idea not yet mentioned is to reject the backscatter. All of it that I’ve seen comes through open mail relays, and there are two blackhole lists which you may find useful for reducing the amount of backscatter you receive.

  • Backscatterer is a DNSBL which explicitly lists SMTP servers that send backscatter and sender callouts.
  • RFC-Ignorant is a DNSBL which lists SMTP servers that do not obey various important RFCs.

Adding these in (along with several other more traditionally focused BLs) reduced the amount of backscatter that I receive by over 90%.


Fighting Spam – What can I do as an: Email Administrator, Domain Owner, or User?

How to import PST file into Office 365 – step by step with screenshots

If you are trying to import a PST backup file into another mailbox or a new Office 365 tenant, you probably want a user-friendly, built-in Microsoft feature to accomplish this. So you check the official help document, and you will get this:

no built-in feature


Yes, you are right! It is annoying! There is still no command-line-free feature for doing this in 2020. I couldn’t find a detailed step-by-step instruction with key screenshots for this, so I wrote this post to save my time, and yours, in the future.

In this article, I will explain how to use Azure AzCopy for importing PST files into Office 365. It is useful when you have multiple mailboxes to import. The network upload is free but uploading may take some time.

Assign the Mailbox Import Export role to Global Admins

Before we can start importing, you will need to assign the Mailbox Import Export role to the Global Admins in Exchange Online.

Open the Exchange Admin Center and go to Permissions.

  • In the admin roles, select Organization Management.
  • Click on the plus to add a role.
  • Select the Mailbox Import Export role and click OK.

Creating a New Import Job


Go to the admin portal and click on Information governance > Import.

  • Select Import PST Files.
  • Click on New Import Job.
  • Choose Upload.

On the summary page, we will find the SAS URL that we need later and a download link to the Azure AzCopy tool.

Download and install the Azure AzCopy tool and copy the SAS URL.

You can leave the screen open, or click Cancel and continue later.

Uploading the file

After you have installed the Azure AzCopy tool you can open it. The tool is command-line only. Yes, you are right! Annoying!

The upload tool will upload all the pst files in the source folder; do NOT select a single pst file. Use the following command to upload the files.

AzCopy.exe /Source:"<Location of PST files>" /Dest:"<SAS URL>" /V:"<Log file location>" /Y

*Do NOT forget the quotes around the Source and Dest paths; otherwise you will get an error that the syntax is incorrect.


Depending on the size of the files it will take some time to upload the pst files.

Checking the uploaded PST Files (optional)


After the files are uploaded, you might want to check for yourself that all the pst files made it to Azure Storage. Download and install the Azure Storage Explorer.

After you open the Storage Explorer, you will need to add an account.

  • Click on Add an account…
  • Select Use a shared access signature (SAS) URI
  • Paste the SAS URI that we got from the import job
  • Click Next and Connect

After you have verified the PST files you can close the Storage Explorer. Make sure you right-click ingestiondata (or the name you assigned) in the explorer and click Detach to disconnect. Otherwise, you will get an error the next time you try to connect.

Mapping the PST Files

The next step is to map the PST files to the mailboxes in Office 365. You can use a CSV file for this; a template for it can be downloaded from Microsoft.

In the CSV file you will find a couple of columns/parameters:

  • Workload – For pst import to a user’s mailbox, leave this as Exchange.
  • FilePath – If your pst files are in the root folder, you can leave this empty.
  • Name – The name of the pst file (XXX.pst).
  • Mailbox – The mailbox you want to import to.
  • IsArchive – Import the pst file into the user’s archive mailbox. Set it to true or false.
  • TargetRootFolder – If you leave this blank, the files will be imported into a new folder named Imported in the user’s mailbox. If you want to merge the data with the existing folders, use just /.
  • You can ignore the other fields; just leave them empty.
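Putting the columns above together, a hypothetical mapping file might look like this (mailbox and file names are placeholders; the official template has a few more columns that can be left empty):

```
Workload,FilePath,Name,Mailbox,IsArchive,TargetRootFolder
Exchange,,annb.pst,annb@contoso.onmicrosoft.com,FALSE,/
Exchange,,ellenr.pst,ellenr@contoso.onmicrosoft.com,TRUE,
```

The first row merges annb.pst into the existing folders of the mailbox; the second imports ellenr.pst into the user’s archive mailbox.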

Enabling archive mailbox in Office 365

to enable archive


If you didn’t enable the archive mailbox, you will get the following error:

enable archive folder and create a new import

Completing the import

After you have mapped all the pst files, we can complete the import job. Go back to the admin portal and open the import job.

  • Select I’m done uploading my files and I have access to the mapping file
  • Click Next and select the mapping file (csv file)
  • Click on Validate to validate the csv file
  • Click on Save when the validation is complete

Your new import job will be analyzed before you can actually start importing. Again, depending on the size, this can take a couple of minutes or longer. Click Refresh to check the status.

Follow the screens to start the import.

Data filtering

You can now filter the data, or if you just want to import everything, simply tick what you want and click on Next to start the import job.

The import process may take a long time, and you may see “In progress - 0%” unchanged even if you close or refresh the job screen and check back later. Yes, you are right! Annoying!

PyTorch for Deep Learning


In 2019, the war for ML frameworks has two main contenders: PyTorch and TensorFlow. There is a growing adoption of PyTorch by researchers and students due to its ease of use, while in industry, TensorFlow is currently still the platform of choice.

Some of the key advantages of PyTorch are:

  • Simplicity: It is very pythonic and integrates easily with the rest of the Python ecosystem. It is easy to learn, use, extend, and debug.
  • Great API: PyTorch shines in terms of usability due to well-designed object-oriented classes which encapsulate all of the important data choices along with the choice of model architecture. The documentation of PyTorch is also excellent and helpful for beginners.
  • Dynamic Graphs: PyTorch implements dynamic computational graphs, which means that the network can change behavior as it is being run, with little or no overhead. This is extremely helpful for debugging and also for constructing sophisticated models with minimal effort, and it allows PyTorch expressions to be automatically differentiated.

PyTorch’s popularity in research is growing. The plot below shows the monthly number of mentions of the word “PyTorch” as a percentage of all mentions across deep learning frameworks. We can see a steep upward trend for PyTorch on arXiv in 2019, reaching almost 50%.

arXiv papers mentioning PyTorch is growing

Dynamic graph generation, tight Python language integration, and a relatively simple API makes PyTorch an excellent platform for research and experimentation.


PyTorch provides a very clean interface to pick the right combination of tools to install. The selector on the PyTorch site shows each choice and the corresponding install command. Stable represents the most currently tested and supported version of PyTorch; this should be suitable for most users. Preview is available if you want the latest, not fully tested and supported, version. You can choose between Anaconda (recommended) and pip installation packages, with support for various CUDA versions as well.

PyTorch Modules

Now we will discuss the key PyTorch library modules: Tensors, Autograd, Optimizers and Neural Networks (nn), which are essential to create and train neural networks.


Tensors are the workhorse of PyTorch. We can think of tensors as multi-dimensional arrays, and PyTorch has an extensive library of operations on them, provided by the torch module. PyTorch tensors are very close to the very popular NumPy arrays; in fact, PyTorch features seamless interoperability with NumPy. Compared with NumPy arrays, PyTorch tensors have the added advantage that both tensors and the related operations can run on the CPU or GPU. The second important thing PyTorch provides is that tensors can keep track of the operations performed on them, which helps to compute gradients or derivatives of an output with respect to any of its inputs.

Tensor refers to the generalization of vectors and matrices to an arbitrary number of dimensions. The dimensionality of a tensor coincides with the number of indexes used to refer to scalar values within the tensor. A tensor of order zero (0D tensor) is just a number, or a scalar. A tensor of order one (1D tensor) is an array of numbers, or a vector. Similarly, a 2nd-order tensor (2D tensor) is an array of vectors, or a matrix.

Now let us create a tensor in PyTorch.

After importing the torch module, we call the function torch.ones, which creates a 2D tensor with nine elements, all filled with the value 1.0.

Other ways include torch.zeros, which creates a zero-filled tensor, and torch.randn, which samples from the standard normal distribution (torch.rand samples from a uniform distribution).
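A minimal sketch of these constructors (the 3×3 shape is chosen for illustration):

```python
import torch

# A 2D tensor with nine elements, all filled with the value 1.0
a = torch.ones(3, 3)

# A zero-filled tensor of the same shape
b = torch.zeros(3, 3)

# A tensor of samples drawn from the standard normal distribution
c = torch.randn(3, 3)

print(a.shape)         # torch.Size([3, 3])
print(a.sum().item())  # 9.0
```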

Each tensor has an associated type and size. The default tensor type when you use the torch.Tensor constructor is torch.FloatTensor. However, you can convert a tensor to a different type (float, long, double, etc.) by specifying it at initialization or later using one of the typecasting methods. There are two ways to specify the initialization type: either by directly calling the constructor of a specific tensor type, such as FloatTensor or LongTensor, or by using the special method torch.tensor() and providing the dtype.
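Both ways of specifying the type can be sketched as follows:

```python
import torch

# Specify the dtype at initialization with torch.tensor()...
x = torch.tensor([1, 2, 3], dtype=torch.float32)

# ...or call the constructor of a specific tensor type
y = torch.LongTensor([1, 2, 3])

# Typecast later with methods such as .long(), .float(), .double()
z = x.long()

print(x.dtype, y.dtype, z.dtype)  # torch.float32 torch.int64 torch.int64
```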

To find the maximum item in a tensor, as well as the index that contains it, use the max() and argmax() functions. We can also use item() to extract a standard Python value from a one-element tensor.
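For example:

```python
import torch

t = torch.tensor([2.0, 7.0, 5.0])

max_val = t.max()     # the largest value: tensor(7.)
max_idx = t.argmax()  # the index of the largest value: tensor(1)

# item() extracts a plain Python number from a one-element tensor
print(max_val.item())  # 7.0
print(max_idx.item())  # 1
```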

Most functions that operate on a tensor and return a tensor create a new tensor to store the result. If you need an in-place function, look for a function with an appended underscore (_), e.g. Tensor.transpose_() performs an in-place transpose of a tensor.

Converting between tensors and NumPy arrays is very simple using torch.from_numpy() and Tensor.numpy().
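A quick sketch of the round trip (note that on CPU both conversions share the underlying memory):

```python
import numpy as np
import torch

arr = np.ones((2, 2))
t = torch.from_numpy(arr)  # NumPy array -> PyTorch tensor
back = t.numpy()           # PyTorch tensor -> NumPy array

# Both conversions share memory, so a change in one is visible in the other
arr[0, 0] = 5.0
print(t[0, 0].item())  # 5.0
```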

Another common operation is reshaping a tensor. This is one of the frequently used operations and very useful too. We can do this with either view() or reshape():

Tensor.reshape() and Tensor.view(), however, are not the same.

  • Tensor.view() works only on contiguous tensors and will never copy memory. It will raise an error on a non-contiguous tensor. But you can make the tensor contiguous by calling contiguous() and then you can call view().
  • Tensor.reshape() will work on any tensor and can make a clone if it is needed.
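The difference can be demonstrated with a transposed (hence non-contiguous) tensor:

```python
import torch

t = torch.arange(6)   # tensor([0, 1, 2, 3, 4, 5])

v = t.view(2, 3)      # fine: t is contiguous
r = t.reshape(3, 2)   # reshape works on any tensor, copying if needed

nc = v.t()            # transpose produces a non-contiguous tensor
try:
    nc.view(6)        # view() raises on a non-contiguous tensor
except RuntimeError:
    print("view() failed on a non-contiguous tensor")

# contiguous() makes a contiguous copy, after which view() works
flat = nc.contiguous().view(6)
print(flat)  # tensor([0, 3, 1, 4, 2, 5])
```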

PyTorch supports broadcasting similar to NumPy. Broadcasting allows you to perform operations between two tensors. Refer here for the broadcasting semantics.
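For example, a (1, 3) row and a (2, 1) column broadcast to a (2, 3) result:

```python
import torch

row = torch.tensor([[1.0, 2.0, 3.0]])  # shape (1, 3)
col = torch.tensor([[10.0], [20.0]])   # shape (2, 1)

result = row + col  # shapes broadcast to (2, 3)
print(result)
# tensor([[11., 12., 13.],
#         [21., 22., 23.]])
```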

Tensor in a nutshell: What, How and Where

Three attributes which uniquely define a tensor are:

dtype: What is actually stored in each element of the tensor? This could be floats or integers etc. PyTorch has nine different data types.

layout: How we logically interpret this physical memory. The most common layout is a strided tensor. Strides are a list of integers: the k-th stride represents the jump in the memory necessary to go from one element to the next one in the k-th dimension of the Tensor.

device: Where the tensor’s physical memory is actually stored, e.g., on a CPU, or a GPU. The torch.device contains a device type ('cpu' or 'cuda') and optional device ordinal for the device type.


Autograd is PyTorch’s automatic differentiation system. What does automatic differentiation do? Given a network, it calculates the gradients automatically. When computing the forward pass, autograd simultaneously performs the requested computations and builds up a graph representing the function that computes the gradient.

PyTorch tensors can remember where they come from, in terms of the operations and parent tensors that originated them, and they can automatically provide the chain of derivatives of such operations with respect to their inputs. This is achieved through requires_grad: if set to True, operations on the tensor are tracked.

t = torch.tensor([1.0, 0.0], requires_grad=True)

After calculating the gradient, the value of the derivative is automatically populated as the grad attribute of the tensor. For any composition of functions involving tensors with requires_grad=True, PyTorch computes derivatives throughout the chain of functions and accumulates their values in the grad attributes of those tensors.
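A minimal sketch of this chain:

```python
import torch

# requires_grad=True tells autograd to record operations on w
w = torch.tensor([1.0, 2.0], requires_grad=True)

# y = sum(w**2), so the derivative dy/dw is 2*w
y = (w ** 2).sum()
y.backward()

print(w.grad)  # tensor([2., 4.])
```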


Optimizers are used to update the weights and biases, i.e. the internal parameters of a model, to reduce the error. Please refer to another of my articles for more details.

PyTorch has a torch.optim package with various optimization algorithms like SGD (Stochastic Gradient Descent), Adam, RMSprop, etc.

Let us see how we can create one of the provided optimizers SGD or Adam.

Without using optimizers, we would need to manually update the model parameters by something like:
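A sketch of such a manual update on a toy loss (the update is wrapped in torch.no_grad() so that autograd does not track it):

```python
import torch

learning_rate = 0.1
params = torch.tensor([1.0, 2.0], requires_grad=True)

# A toy loss whose gradient with respect to params is 2*params
loss = (params ** 2).sum()
loss.backward()

# Manual gradient-descent step
with torch.no_grad():
    params -= learning_rate * params.grad
    params.grad.zero_()  # clear the gradient for the next iteration

print(params)  # tensor([0.8000, 1.6000], requires_grad=True)
```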

We can use the step() method from our optimizer to take an optimization step, instead of manually updating each parameter.

The value of params is updated when step() is called. The optimizer looks into params.grad and updates params by subtracting learning_rate times grad from it, exactly as we would do without an optimizer.
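A sketch of creating an SGD optimizer and taking one step (a toy loss for illustration):

```python
import torch

params = torch.tensor([1.0, 2.0], requires_grad=True)
optimizer = torch.optim.SGD([params], lr=0.1)
# An Adam optimizer would be created the same way:
# optimizer = torch.optim.Adam([params], lr=0.1)

loss = (params ** 2).sum()  # toy loss; its gradient is 2*params

optimizer.zero_grad()  # clear previously accumulated gradients
loss.backward()        # populate params.grad
optimizer.step()       # params <- params - lr * params.grad

print(params)  # tensor([0.8000, 1.6000], requires_grad=True)
```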

The torch.optim module helps us abstract away the specific optimization scheme; we just pass it a list of params. Since there are multiple optimization schemes to choose from, we only need to choose one for our problem, and the underlying PyTorch library does the magic for us.

Neural Network

In PyTorch the torch.nn package defines a set of modules which are similar to the layers of a neural network. A module receives input tensors and computes output tensors. The torch.nn package also defines a set of useful loss functions that are commonly used when training neural networks.

Steps of building a neural network are:

  • Neural Network Construction: Create the neural network layers and set up the parameters (weights, biases).
  • Forward Propagation: Calculate the predicted output. Measure error.
  • Back-propagation: After finding the error, we backward propagate our error gradient to update our weight parameters. We do this by taking the derivative of the error function with respect to the parameters of our NN.
  • Iterative Optimization: We want to minimize error as much as possible. We keep updating the parameters iteratively by gradient descent.

Build a Neural Network

Let us follow the above steps and create a simple neural network in PyTorch.

We call our NN Net here. We’re inheriting from nn.Module; combined with super().__init__(), this creates a class that tracks the architecture and provides a lot of useful methods and attributes.

Our neural network Net has one hidden layer self.hl and one output layer self.ol.

This line creates a module for a linear transformation with 1 input and 10 outputs. It also automatically creates the weight and bias tensors. You can access the weight and bias tensors once the network net is created with net.hl.weight and net.hl.bias.

We have defined the activation using self.relu = nn.ReLU().

PyTorch networks created with nn.Module must have a forward() method defined. It takes in a tensor x and passes it through the operations you defined in the __init__ method.

We can see that the input tensor goes through the hidden layer, then activation function (relu), then the output layer.
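A minimal sketch consistent with the description above (the single output unit is an assumption):

```python
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Linear transformation with 1 input and 10 outputs;
        # the weight and bias tensors are created automatically
        self.hl = nn.Linear(1, 10)
        self.relu = nn.ReLU()
        self.ol = nn.Linear(10, 1)  # output layer (size 1 assumed)

    def forward(self, x):
        # hidden layer -> activation (relu) -> output layer
        return self.ol(self.relu(self.hl(x)))

net = Net()
print(net.hl.weight.shape)           # torch.Size([10, 1])
print(net(torch.randn(5, 1)).shape)  # torch.Size([5, 1])
```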

Here we have to calculate error or loss and backward propagate our error gradient to update our weight parameters.

A loss function takes the (output, target) pair and computes a value that estimates how far the output is from the target. There are several different loss functions in the torch.nn package. A simple one is nn.MSELoss, which computes the mean-squared error between the input and the target.

A simple function call, loss.backward(), propagates the error. Don’t forget to clear the existing gradients first, though, or the new gradients will be accumulated into the existing ones. Have a look at the hidden layer bias gradients before and after calling loss.backward().

So after calling the backward(), we see the gradients are calculated for the hidden layer.
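A sketch of this, using standalone layers for brevity (the names are illustrative):

```python
import torch
from torch import nn

hl = nn.Linear(1, 10)  # hidden layer
ol = nn.Linear(10, 1)  # output layer

x = torch.randn(4, 1)
target = torch.randn(4, 1)

output = ol(torch.relu(hl(x)))
loss = nn.MSELoss()(output, target)

print(hl.bias.grad)  # None: no gradients before the backward call
loss.backward()
print(hl.bias.grad.shape)  # torch.Size([10]): populated after backward
```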

We have already seen how optimizer helps us to update the parameters of the model.

Please be careful not to miss the zero_grad() call. If you miss calling it, gradients will accumulate at every call to backward(), and your gradient descent will not converge. Below, a recent tweet from Andrej Karpathy shows the frustration and the time it can take to fix such bugs.

Now with our basic steps (1,2,3) complete, we just need to iteratively train our neural network to find the minimum loss. So we run the training_loop for many epochs until we minimize the loss.

Let us run our neural network to train on input x_t and target y_t.

We call training_loop for 1500 epochs and pass all the other arguments: optimizer, model, loss_fn, inputs and target. After every 300 epochs we print the loss, and we can see it decreasing after every iteration. It looks like our very basic neural network is learning.
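A sketch of the whole loop under the names used above (x_t, y_t, and the model below are illustrative stand-ins):

```python
import torch
from torch import nn

def training_loop(n_epochs, optimizer, model, loss_fn, x_t, y_t):
    for epoch in range(1, n_epochs + 1):
        optimizer.zero_grad()        # clear accumulated gradients
        output = model(x_t)          # forward pass
        loss = loss_fn(output, y_t)  # measure the error
        loss.backward()              # back-propagate the error gradient
        optimizer.step()             # update the parameters
        if epoch % 300 == 0:
            print(f"Epoch {epoch}, Loss {loss.item():.4f}")
    return loss.item()

torch.manual_seed(0)
x_t = torch.linspace(-1.0, 1.0, 50).unsqueeze(1)  # toy input
y_t = 2 * x_t + 1                                 # toy target

model = nn.Sequential(nn.Linear(1, 10), nn.ReLU(), nn.Linear(10, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
final_loss = training_loop(1500, optimizer, model, nn.MSELoss(), x_t, y_t)
```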

We plot the model output (black crosses) and target data (red circles), the model seems to learn quickly.

So far we have discussed the basic or essential elements of PyTorch to get you started. We can see how modular the code we build is, with each component providing the basic blocks that can be further extended to create a machine learning solution as per our requirements.

Creating machine-learning-based solutions for real problems involves significant effort in data preparation. However, the PyTorch library provides many tools to make data loading easier and more readable, like torchvision, torchtext and torchaudio to work with image, text and audio data respectively.

Training machine learning models is often very hard, and a tool that helps visualize the model and understand the training progress is always needed when we encounter problems. TensorBoard is one such tool; it logs events from model training, including various scalars (e.g. accuracy, loss), images, histograms, etc. Since the release of PyTorch 1.2.0, TensorBoard is a PyTorch built-in feature. Please follow the linked tutorials for installation and use of TensorBoard in PyTorch.

Thanks for the read. See you soon with another post 🙂





Tips to improve data curation process


A data curation and modeling strategy can ensure accuracy and enhance governance. Experts offer eight best practices for curating data. First, start at the source.

The immense benefits of big data are attainable only when organizations can find ways to manage a massive volume of varied data.

“Most Fortune 500 companies are still struggling to manage their data, or what is called data curation,” said Kuldip Pabla, senior vice president of engineering at K4Connect, a technology platform for seniors and individuals living with disabilities.

Data modeling complements the data curation process by creating a framework to guide how data sets can efficiently and accurately be integrated into new analytics applications.

Pabla said he sees data curation as the management of data throughout its lifecycle, from creation or ingestion until it is archived or becomes obsolete and is deleted. During this journey, data passes through various phases of transformation; data curation ensures that the data is securely stored and that it can be reliably and efficiently retrieved.

It’s important to establish a data curation process that ensures accuracy and data governance, provides security, and makes it easier to find and use data sets. Although technology can help, it’s better to start with a solid understanding of your goals rather than focusing on a particular tool.

1. Plan for accuracy at the source

To ensure accuracy, it’s much easier to validate data at the source rather than to assess its accuracy later. You may need to use different practices for data gathered in-house and data from other sources.

One approach to ensuring data accuracy is to ask users to validate their own data; another is to use sampling and auditing to estimate accuracy levels.

2. Annotate and label

It’s easier to manage data sets and troubleshoot problems if the data sets are annotated and labeled as part of the data curation process. This can include simple enrichments, like adding the time and location of an event.

However, “while tagging enriches the data, inaccurate metadata will lead to inaccuracies during transformation or processing of data,” Pabla said.

3. Maintain strong security and privacy practices

Large curated data sets can also pose a risk if they are compromised by hackers or insiders. Good security practices include encryption, de-identification and a strong data governance model.

“At the minimum, CIOs and CTOs can use strong encryptions to encrypt a piece of data in flight and at rest, along with [using] a stronger firewall to guard their cloud infrastructure or data centers,” Pabla said.

Enterprises should also consider separating personally identifiable information from the rest of the data. This makes it easier to safely distribute curated data sets to various analytics teams. Hybrid analytics and machine learning models could even run on a user’s smartphone or set-top box in a way that provides insight while keeping users in control of their data, Pabla said.

Another way to provide stronger security is to create a strong and effective governance model that outlines who has access to what data — especially raw personal data. The fewer human eyes that have access to data, the more secure it is, Pabla said.

4. Look ahead

It’s important to start the data curation process with the end in mind. Managers need to track how analytics and machine learning apps are using data sets and work backward to improve how the data is aggregated, said Josh Jones, manager of analytics at Aspirent, an analytics consulting service. This includes maintaining at least three periods of time for trending data.

It’s also good to build repeatable, transparent processes for how you clean the data. This enables you to reuse those processes later.

To start, create an inventory of basic steps to identify duplicates and outliers.


“Make sure these basics are applied to each data set consistently,” Jones said.

It’s also important to think about at what point you want to clean the data. Some organizations prefer to do it at the point of intake, while others find it works better right before reporting.

Another practice is to curate data with the tools in mind. For example, if your organization uses specific tools, like Tableau, certain data formats can facilitate faster dashboard development.

5. Balance data governance with agility

Organizations need to strike a balance between data governance and business agility.

“I’m seeing businesses shifting away from the Wild West of self-service data wrangling to team-based, enterprise data preparation and analytics solutions that support better search, collaboration and governance of curated data sets,” said Jen Underwood, senior director at DataRobot, an automated machine learning platform.

Proper data curation and governance provides a management framework that can enable availability, usability, integrity and security of data usage in an enterprise. It improves visibility, control of and trust in data, and by ensuring the safety and accuracy of data, it promotes greater confidence in the resulting insights and analytics.

Some practices that can help strike this balance include engaging users, sharing experiences and focusing on the most-used data first. If users have a tool that encourages them to centralize their data securely, they are more likely to follow secure practices.

A centralized platform can also help users identify data, processes and other information that might be relevant to their analytics or machine learning project. Machine learning can be used to identify trends in usage, as well as potential risks.

6. Identify business needs

Data provides value only when its use satisfies a business need. Daniel Mintz, chief data evangelist at Looker, a data modeling platform, recommends starting with one question.

“What does the business need out of these data sets?” he said. “If you don’t ask that upfront, you can end up with just a mess of data sources that no one actually needs.”

It’s important to pull in the business owners and the business subject-matter experts early. These people are your users. Not pulling them in at the start is just as bad as building software without talking to the intended audience.

“Always avoid curating a bunch of data without talking to the intended audience,” Mintz said.

7. Balance analytics users and data stewards

A centralized data governance effort is important. But it’s also a good idea to include the analytics users as part of this process, said Jean-Michel Franco, senior director of data governance product at Talend.

“They need to contribute to the data governance process, as they are the ones that know their data best,” he said.

One strategy is to adopt a Wikipedia-like approach with a central place where data is shared, and where anyone can contribute to the data curation process under well-defined curation rules.

More centralized data stewardship roles can complement these efforts by implementing well-defined data governance processes covering several activities, including monitoring, reconciliation, refining, deduplication, cleansing and aggregation, to help deliver quality data to applications and end users.

8. Plan for problems

Developing a robust data curation process and data modeling strategy requires admins to account for imprecision, ambiguity and changes in the data.

“Spitting out some numbers at the end of a complex data pipeline is not very helpful if you can’t trace the underlying data back to its source to assess its fitness for purpose at every stage,” explained Justin Makeig, director of product management of MarkLogic Corp., an operational database provider.

Confidence in a source of data, for example, is a key aspect of how MarkLogic’s Department of Defense and intelligence customers think about analytics. They need the ability to show their work but, more importantly, they need to update their findings if that confidence changes. This makes it easier to identify when decisions made in the past relied on a data source that is now known to be untrustworthy. It’s only possible to identify the impact of untrustworthy data by keeping all the context around with the data in a queryable state.

The difference between very high, high, medium assurance SSL certificates?


The level of assurance mostly depends on the certificate validation type, i.e. the amount of information the certificate applicant provides to the Certificate Authority (Comodo, now Sectigo). The deeper the certificate validation process performed by Comodo (now Sectigo), the higher the assurance.

Domain Validation (DV) SSL certificates provide low and medium assurance. To issue a DV certificate, a Certificate Authority only needs to verify that the certificate applicant can manage the domain name the certificate is activated for. The validation process is fast; usually it takes up to 15 minutes. Low assurance certificates provide the same security and encryption level as the other ones and are not limited in any way. They are good for blogs and personal websites. They are the following: PositiveSSL, PositiveSSL Multi-Domain, PositiveSSL Wildcard.
Medium assurance certificates are suitable for small and medium-volume business and personal websites. You can check these certificates here: EssentialSSL, EssentialSSL Wildcard.

SSL certificates that provide the high assurance level are Organization Validation (OV) ones. For certificate issuance, it is necessary to complete Domain Control Validation (DCV), and the Certificate Authority must also verify the legal and physical existence of the company that applies for the certificate. The validation process may take up to 2 business days. High assurance certificates are a good solution for medium and large-volume websites.
The list of high assurance certificates: InstantSSL, InstantSSL Pro, PremiumSSL, Unified Communications, PremiumSSL Wildcard, Multi-Domain SSL.

Extended Validation (EV) SSL certificates provide the very high assurance level. One must submit the documents required by the Certificate Authority to verify the certificate applicant. It is also required to verify the legal and physical existence of the company and its telephone number to complete the certificate validation. These certificates are best for large-volume e-commerce websites and large organizations. Additionally, the green bar with the company details will be shown with EV certificates: EV SSL, EV Multi-Domain SSL.

Which type of SSL certificate to use


Which type of certificate is right for me?

So, you want a single certificate to cover multiple sites. Which type of certificate should you purchase? The answer depends on a number of variables, including the number of sites, the number of base domains, which subdomains of the base domains need to be covered, and also financial considerations and your company’s IT policies.

Multi-domain SAN SSL certificates

If you want to cover more than one registered base domain on a single certificate, then your only choice is a multi-domain SAN certificate. We offer several multi-domain SAN certificates, both with and without EV features. Each certificate can cover up to 100 sites, from any registered domain name that you own, on the same certificate. Each individual site must be listed as either the Common Name (CN) or a SAN on the certificate.

  • Secure up to 100 sites from any registered base domain on a single certificate.
  • Lower certificate management costs.
  • Add or change SAN names by purchasing additional SANs throughout the life of the certificate.
  • Each Site must be listed separately.
  • Certificates with more than 25 SANs may be difficult to administer.
  • Can get expensive.
  • A single key pair used by more than one server can conflict with a Company’s IT Policies as it presents the potential for a single point of failure affecting multiple servers.

Wildcard SSL Certificates

A Wildcard certificate will cover any sub domain at a single level for a single registered base domain. The “*” in the Common Name (CN) of a wildcard certificate represents the variable. It is the single variable for the certificate.

Example: a Common Name of *

Will secure…

Will not secure…
  • (different TLD)
  • (too many subdomains)
  • (different domain)
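The single-level rule can be sketched in code. Below is a simplified model in Python (the domain names are hypothetical, and real certificate name matching has additional edge cases):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Check whether a wildcard Common Name such as '*.example.com'
    covers a hostname. The '*' stands in for exactly one label, so it
    covers 'www.example.com' but not 'example.com' or 'a.b.example.com'."""
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if p_labels[0] != "*":
        return pattern == hostname          # plain name: exact match only
    if len(p_labels) != len(h_labels):
        return False                        # '*' covers a single level only
    return p_labels[1:] == h_labels[1:]     # remaining labels must match exactly

# '*.example.com' covers one level of subdomains only:
assert wildcard_matches("*.example.com", "www.example.com")
assert not wildcard_matches("*.example.com", "example.com")      # base domain
assert not wildcard_matches("*.example.com", "a.b.example.com")  # too many subdomains
assert not wildcard_matches("*.example.com", "www.example.net")  # different TLD
```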

DV vs OV Wildcard Certificates

We offer both Domain Validated (DV) wildcard certificates and Organization Validated (OV) wildcard certificates. The OV wildcard certificates include the organization name on the certificate (e.g., Gotham Books, Inc.) and are vetted both by validating that the organization is registered and in good standing with the local registration authority and by having a full-time employee of the organization verify that the organization is indeed purchasing the certificate.

A DV certificate does not include any organization information; it simply represents that a party that passed Domain Control Validation purchased the certificate, but it does not state who that party is. DV certificates are approved via a simple email or DNS posting, while OV certificates also require the organization validation and verification to be completed manually. DV certs are easier to get and can be issued within just minutes.

  • Secure unlimited sites from a single registered base domain at a single sub domain level.
  • Add new sites without having to reissue the cert.
  • Sites do not have to be separately listed.
  • Will only cover sites at the sub domain listed as the variable in the certificate name.
  • Base domain is covered as a SAN only for first level sub domain wildcard certificates.
  • Expensive.
  • A Single DV w/SANs certificate will cover up to 5 sub domains for much less than the cost of a Wildcard and does not limit the SANs to a single sub domain.
  • A single key pair used by more than one server can conflict with a Company’s IT Policies as it presents the potential for a single point of failure affecting multiple servers.

Standard DV SSL w/SANs Certificates

The Standard DV SSL w/SANs Certificate will cover the Common Name plus up to 4 additional subdomains of the same base domain. This is our top selling certificate. The certificate is Domain Validated (DV) meaning it is approved via a simple email or DNS posting and does not carry any organization information.

Tip: If the Common Name on the certificate is a first-level subdomain, then the base domain will be covered as a free SAN and will not count toward the 4 additional SANs. SAN names can be from any subdomain level of the base domain and can be changed at any time during the life of the certificate.

  • Secure up to 5 sites from a single registered base domain at any sub domain level.
  • Issued quickly with simple approval process.
  • SANs can be changed throughout the life of the certificate.
  • Inexpensive and easy to get.
  • Will only cover sites that have the same base domain.
  • Does not include organization information.
  • Cannot expand beyond 4 SANs.
  • Two Standard DV w/SANs certificates will cover up to 10 sub domains of the same base domain at any level for less than the cost of a wildcard certificate.
  • A single key pair used by more than one server can conflict with a Company’s IT Policies as it presents the potential for a single point of failure affecting multiple servers.

How IIS Bindings Work


Let’s make sure our servers are directing web traffic properly with some basic IIS Bindings rules and practices.

While IIS is a powerful tool for hosting sites on Microsoft Servers, we will only be going over the “easy” stuff in this article. However, it’s safe to say that before you perform an operation that requires you to edit the web.config file for a site, you may want to check IIS first, since it typically provides a GUI to do the same thing. Without further ado, let’s dive right in.

IIS, or ‘Internet Information Services’, is a set of services for servers using Microsoft’s operating system.

Many versions of IIS exist, but if you are working on a server today, you’ll typically be using 6.0, 7.0, 7.5, 8.0, or 8.5. You can also run more than one version on a server at a time (such as both 6.0 and 7.0). The differences between 6.0 and the other versions are quite vast (in how they handle code and features), while 7.0, 7.5, 8.0, and 8.5 are similar (although the later versions offer a few more features). The main purpose of this service is to house, administrate, configure, and operate your sites.

What is an IIS Binding?

An IIS binding is simply a mechanism that “binds” to your site and tells the server how that site can be reached. When we talk about IIS bindings, we are talking about the Site Bindings dialog in IIS Manager (right-click a site and choose Edit Bindings). As the name suggests, it is aptly named.

Why are IIS Bindings Important?

Because we need them to direct the traffic sent to the server to the appropriate site. In other words, DNS directs traffic to your server, and then bindings take over to get that traffic to the appropriate site by using each site’s bindings.

The primary rule for bindings is that when you have multiple bindings, each must differ in some way. There are three binding options with which to do this: Port, IP Address, and Host Header. When working with multiple sites, no site can have the same Port, IP Address, and Host Header as another; they must differ on at least one of these criteria.
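That uniqueness rule is easy to express in code. Here is a minimal Python sketch (the site names and values are made up for illustration):

```python
def find_binding_conflicts(bindings):
    """Given (site, ip, port, host_header) tuples, report any pair of
    distinct sites that share the exact same (ip, port, host_header)
    combination, i.e. bindings that differ on none of the three criteria."""
    seen = {}
    conflicts = []
    for site, ip, port, host in bindings:
        key = (ip, port, host)
        if key in seen and seen[key] != site:
            conflicts.append((seen[key], site, key))
        else:
            seen.setdefault(key, site)
    return conflicts

bindings = [
    ("SiteA", "*", 80, "www.example.com"),
    ("SiteB", "*", 80, "api.example.com"),  # same IP/port, different host header: OK
    ("SiteC", "*", 80, "www.example.com"),  # identical triple: conflicts with SiteA
]
assert find_binding_conflicts(bindings) == [
    ("SiteA", "SiteC", ("*", 80, "www.example.com"))
]
```

A conflict like this is exactly the situation described later, where starting one site automatically stops another.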


How do IIS Bindings Work?

When a request hits the server, the first thing it looks at is the Port. The two most common ports are 80 (HTTP) and 443 (HTTPS). That being said, it’s not uncommon to have multiple sites using the same port.

The next item that a request considers is the IP Address. This can go either way: if you have plenty of static IP Addresses to use, it’s best practice to have a unique IP Address for each site. However, you can use the same IP Address for multiple sites, which helps if you are paying per IP.

So, what happens when sites share the same Port and IP Address? You must differentiate by Host Header (aka host name). This name must match the request exactly: whatever host name a site is requested by must be present as a Host Header on that site’s binding. Sites may have multiple Host Headers to accommodate CNAMEs/aliases or sites that are accessed by different names.
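Putting the three criteria together, the dispatch logic can be modeled roughly as follows. This is a simplified Python sketch with hypothetical sites, not how IIS is actually implemented (IIS also supports wildcard IP bindings, among other things):

```python
def route_request(bindings, ip, port, host):
    """Pick the site whose (ip, port, host_header) binding matches the
    request. An exact host header match wins; a binding with an empty
    host header acts as a catch-all for that ip/port pair."""
    catch_all = None
    for site, b_ip, b_port, b_host in bindings:
        if (b_ip, b_port) != (ip, port):
            continue                  # wrong IP or port: skip this binding
        if b_host == host:
            return site               # exact host header match
        if b_host == "":
            catch_all = site          # remember the catch-all binding
    return catch_all

bindings = [
    ("Intranet", "10.0.0.5", 80, "intranet.example.com"),
    ("Default Web Site", "10.0.0.5", 80, ""),
]
assert route_request(bindings, "10.0.0.5", 80, "intranet.example.com") == "Intranet"
assert route_request(bindings, "10.0.0.5", 80, "other.example.com") == "Default Web Site"
```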

*On a side note, when you’re ordering an SSL certificate, be careful of the name you choose. The site must be accessed by exactly that name (unless it’s a wildcard certificate). You can use IIS redirects to “force” requests into appropriate bindings, but I will cover that in detail in another post.

Proper Planning Makes IIS Bindings Successful

This is the “very” basic essence of bindings. While seemingly simple, this can become a major pain point when dealing with many sites if proper planning and/or documentation is not utilized. If you cannot get a site to start without it automatically stopping another, then you have a binding conflict that you must address. Proper planning will avoid this issue altogether, so keep these rules in mind when planning your environment.

Dynamic Programming Practice Problems

remember the past

Dynamic Programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions using a memory-based data structure (array, map,etc). Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup. So the next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time. This technique of storing solutions to subproblems instead of recomputing them is called memoization.
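As a minimal sketch of memoization, here is the classic Fibonacci example in Python: the naive recursion solves the same subproblems exponentially many times, while the cached version solves each one exactly once.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """nth Fibonacci number; each subproblem is computed once and cached."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

assert fib(10) == 55
assert fib(50) == 12586269025  # instant with memoization; infeasible naively
```

The same idea of caching indexed subproblem solutions underlies all of the problems listed below.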

Here’s a brilliant explanation of the concept of Dynamic Programming on Quora — Jonathan Paulson’s answer to How should I explain dynamic programming to a 4-year-old?

Below are the top 50 common data structure problems that can be solved using Dynamic Programming –

  1. Longest Common Subsequence | Introduction & LCS Length
  2. Longest Common Subsequence | Finding all LCS — Techie Delight
  3. Longest Common Substring problem — Techie Delight
  4. Longest Palindromic Subsequence using Dynamic Programming
  5. Longest Repeated Subsequence Problem — Techie Delight
  6. Implement Diff Utility — Techie Delight
  7. Shortest Common Supersequence | Introduction & SCS Length
  8. Shortest Common Supersequence | Finding all SCS — Techie Delight
  9. Longest Increasing Subsequence using Dynamic Programming — Techie Delight
  10. Longest Bitonic Subsequence — Techie Delight
  11. Increasing Subsequence with Maximum Sum — Techie Delight
  12. The Levenshtein distance (Edit distance) problem — Techie Delight
  13. Find size of largest square sub-matrix of 1’s present in given binary matrix — Techie Delight
  14. Matrix Chain Multiplication using Dynamic Programming
  15. Find the minimum cost to reach last cell of the matrix from its first cell — Techie Delight
  16. Find longest sequence formed by adjacent numbers in the matrix — Techie Delight
  17. Count number of paths in a matrix with given cost to reach destination cell
  18. 0–1 Knapsack problem — Techie Delight
  19. Maximize the Value of an Expression — Techie Delight
  20. Partition problem | Dynamic Programming Solution — Techie Delight
  21. Subset Sum Problem — Techie Delight
  22. Minimum Sum Partition Problem — Techie Delight
  23. Find all N-digit binary strings without any consecutive 1’s — Techie Delight
  24. Rod Cutting Problem — Techie Delight
  25. Maximum Product Rod Cutting — Techie Delight
  26. Coin change-making problem (unlimited supply of coins) — Techie Delight
  27. Coin Change Problem (Total number of ways to get the denomination of coins) — Techie Delight
  28. Longest Alternating Subsequence Problem — Techie Delight
  29. Count number of times a pattern appears in given string as a subsequence
  30. Collect maximum points in a matrix by satisfying given constraints — Techie Delight
  31. Count total possible combinations of N-digit numbers in a mobile keypad — Techie Delight
  32. Find Optimal Cost to Construct Binary Search Tree — Techie Delight
  33. Word Break Problem | Dynamic Programming — Techie Delight
  34. Word Break Problem | Using Trie Data Structure — Techie Delight
  35. Total possible solutions to linear equation of k variables — Techie Delight
  36. Wildcard Pattern Matching — Techie Delight
  37. Find Probability that a Person is Alive after Taking N steps on an Island
  38. Calculate sum of all elements in a sub-matrix in constant time — Techie Delight
  39. Find Maximum Sum Submatrix in a given matrix — Techie Delight
  40. Find Maximum Sum Submatrix present in a given matrix — Techie Delight
  41. Find maximum sum of subsequence with no adjacent elements — Techie Delight
  42. Maximum Subarray Problem (Kadane’s algorithm) — Techie Delight
  43. Single-Source Shortest Paths — Bellman Ford Algorithm — Techie Delight
  44. All-Pairs Shortest Paths — Floyd Warshall Algorithm — Techie Delight
  45. Pots of Gold Game using Dynamic Programming — Techie Delight
  46. Find minimum cuts needed for palindromic partition of a string
  47. Maximum Length Snake Sequence — Techie Delight
  48. 3-Partition Problem — Techie Delight
  49. Calculate size of the largest plus of 1’s in binary matrix — Techie Delight
  50. Check if given string is interleaving of two other given strings
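As a taste of the list above, problem 1 (LCS length) has a standard bottom-up DP formulation; a minimal Python sketch:

```python
def lcs_length(x: str, y: str) -> int:
    """Length of the Longest Common Subsequence of x and y.
    dp[i][j] holds the LCS length of the prefixes x[:i] and y[:j]."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one character
    return dp[m][n]

assert lcs_length("ABCBDAB", "BDCABA") == 4
```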

How to transfer a GoDaddy SSL certificate to Windows IIS 8

Moving an SSL certificate to a new server isn’t always straightforward. Moving one from GoDaddy to a new Windows Server proved downright frustrating, since their directions result in a disappearing certificate.

The problem is that the generated certificate was created using a certificate signing request (CSR) from a different machine, and the private key is not included in the SSL bundle (with good reason). When you try to import such an SSL certificate into IIS using the steps outlined by GoDaddy, Windows Server will give the impression that everything worked just fine, and no error will be given. You can even open and view the newly imported certificate in IIS Manager. But this is a lie! Silently, Windows rejected the certificate because it did not contain a private key it could validate, and you only find out about it when you try to apply the cert to a website and the certificate no longer exists.

To solve this issue, we can make use of a handy tool GoDaddy provides that lets you re-key a certificate. This way, we can generate a new certificate request on the new server and re-key the SSL certificate based on the newly generated private key. It sounds hard, but here are the steps:

Step 1)

Log into your GoDaddy account, expand the SSL Certificates section, and click the Manage button for the SSL Certificate you want to transfer.

Step 2)

Click on the ‘View Status’ link for the SSL Certificate to transfer

Step 3)

Click on the big ‘Manage’ button. You’ll now be on the Manage Certificate screen

Step 4)

Before we proceed further, we need to generate a new Certificate Signing Request on the Windows Server. Open IIS Manager and click on the server node you want to add the certificate to. Then select the ‘Server Certificates’ item toward the bottom. On the Server Certificates screen, click the ‘Create Certificate Request’ link. Fill out the certificate information and save the file to your desktop. Open the certificate request file in notepad and copy the contents.

Step 5)

Back at the GoDaddy certificate manager, expand the Re-Key Certificate area and paste in your certificate request. Verify the domain name you want to protect is displayed and hit save. GoDaddy will now go through a process of validating your account and re-keying your certificate. This should only take a couple of minutes.

Step 6)

Back at the Server Management Options screen, click the big ‘Download’ button to retrieve your newly keyed certificate. Choose the IIS option from the select box and copy the zip file to your Windows Server.

Step 7)

Unzip the certificate files to your Windows server, then click the ‘Complete Certificate Request’ link back in the IIS Manager Server Certificates area. Choose the certificate you just downloaded from GoDaddy and select the ‘Personal’ store for the certificate and click OK.


Now you should be able to choose the SSL certificate for your website in IIS as you’d expect without it vanishing on you.

Don’t forget to import the intermediate certificates from GoDaddy (those instructions do work) if you don’t have them already, or else your new certificate may throw warnings in client browsers.

2 Ways to Deploy Website in IIS


How to Deploy Website in IIS via File copy

After developing a web application, the next important step is to deploy the web application. The web application needs to be deployed so that it can be accessed by other users. The deployment is done to an IIS Web server.

There are various ways to deploy a web application. Let’s look at the first method, which is File copy.

We use the web application created in the earlier sections. Let’s follow the steps below to achieve this.

Step 1) Let’s first ensure we have our web application ‘DemoApplication’ open in Visual Studio.

Deploying a website on IIS

Step 2) Open the ‘Demo.aspx’ file and enter the string “Guru 99 ASP.Net.”


<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
</head>
<body>
	  <form id="form1" runat="server">
Guru 99 ASP.Net
</form> </body> </html>

Now just run the application in Visual Studio to make sure it works.



The text ‘Guru 99 ASP.Net’ is displayed. You should see this output in the browser.

Step 3) Now it’s time to publish the solution.

  1. Right-click the ‘DemoApplication’ in the Solution Explorer
  2. Choose the ‘Publish’ Option from the context menu.


It will open another screen (see step below).

Step 4) In the next step, choose the ‘New Profile’ to create a new Publish profile. The publish profile will have the settings for publishing the web application via File copy.


Step 5) In the next screen we have to provide the details of the profile.

  1. Give a name for the profile such as FileCopy
  2. Click the OK button to create the profile


Step 6) In this step, we specify that we are going to publish the website via File copy.

  1. Choose the Publish method as File System.
  2. Enter the target location as C:\inetpub\wwwroot – This is the standard file location for the Default Web site in IIS.
  3. Click ‘Next’ button to proceed.


Step 7) In the next screen, click the Next button to proceed.


Step 8) Click the ‘Publish’ button in the final screen


When all of the above steps are executed, you will get the following output in Visual Studio



From the output, you will see that the Publish succeeded.

Now just open the browser and go to the URL – http://localhost/Demo.aspx


You can see from the output that when you browse to http://localhost/Demo.aspx, the page appears and displays the text ‘Guru 99 ASP.Net’.
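Stripped of the Visual Studio tooling, the File copy method amounts to copying the build output into the site’s physical folder. A rough Python sketch (the paths are hypothetical, and Visual Studio’s publish step also compiles the project and transforms config files):

```python
import shutil
from pathlib import Path

def deploy_by_file_copy(build_output: str, site_root: str) -> None:
    """Copy published build output into the IIS site's physical path,
    overwriting files that already exist there."""
    src, dst = Path(build_output), Path(site_root)
    dst.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dst, dirs_exist_ok=True)

# e.g. deploy_by_file_copy(r"bin\Release\publish", r"C:\inetpub\wwwroot")
```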

How to Publish ASP.NET Website

Another method to deploy the web application is by publishing the website. The key differences in this method are that:

  • You have more control over the deployment.
  • You can specify which Web site you want to deploy your application to.
  • For example, suppose you had two websites, WebSiteA and WebSiteB. With the Web publish method, you can publish your application to either website. Also, you don’t need to know the physical path of the Web site.
  • In the FileCopy method, you have to know the physical path of the website.

Let’s use the same Demo Application and see how we can publish using the “website publish method.”

Step 1) In this step,

  1. Right-click the ‘DemoApplication’ in the Solution Explorer
  2. Choose the Publish Option from the context menu.


Step 2) On the next screen, select the ‘New Profile’ option to create a new Publish profile. The publish profile will have the settings for publishing the web application via Web Deploy.


Step 3) In the next screen we have to provide the details of the profile.

  1. Give a name for the profile such as ‘WebPublish’
  2. Click the ‘OK’ button to create the profile


Step 4) In the next screen, you need to give all the details for the publish process

  1. Choose the Publish method as Web Deploy
  2. Select the server as Localhost
  3. Enter the site name as Default Web Site – remember that this is the name of the website in IIS
  4. Enter the destination URL as http://localhost
  5. Finally, click the Next button to proceed


Step 5) Click the ‘Next’ button on the following screen to continue


Step 6) Finally, click the Publish button to publish the Website


When all of the above steps are executed, you will get the following output in Visual Studio.



From the output, you will see that the Publish succeeded.

Now just open the browser and go to the URL – http://localhost/Demo.aspx


You can see from the output that when you browse to http://localhost/Demo.aspx, the page appears and displays the text ‘Guru 99 ASP.Net’.


  • After an ASP.Net application is developed, the next step is to deploy it.
  • In .Net, IIS is the default web server for ASP.Net applications.
  • ASP.Net web applications can be deployed using the File copy method.
  • ASP.Net web applications can also be deployed using the Web Publish method.