PyTorch for Deep Learning

Source: https://towardsdatascience.com/pytorch-for-deep-learning-a-quick-guide-for-starters-5b60d2dbb564

In 2019, the war for ML frameworks has two main contenders: PyTorch and TensorFlow. Researchers and students are increasingly adopting PyTorch due to its ease of use, while in industry TensorFlow is currently still the platform of choice.

Some of the key advantages of PyTorch are:

  • Simplicity: It is very pythonic and integrates easily with the rest of the Python ecosystem. It is easy to learn, use, extend, and debug.
  • Great API: PyTorch shines in terms of usability due to well-designed object-oriented classes which encapsulate all of the important data choices along with the choice of model architecture. The PyTorch documentation is also excellent and helpful for beginners.
  • Dynamic Graphs: PyTorch implements dynamic computational graphs, which means that the network can change behavior as it is being run, with little or no overhead. This is extremely helpful for debugging and also for constructing sophisticated models with minimal effort, while still allowing PyTorch expressions to be automatically differentiated.

PyTorch's popularity in research is growing. The plot below shows the monthly number of mentions of the word “PyTorch” as a percentage of all mentions among other deep learning frameworks. We can see a steep upward trend for PyTorch on arXiv in 2019, reaching almost 50%.

arXiv papers mentioning PyTorch are growing

Dynamic graph generation, tight Python language integration, and a relatively simple API make PyTorch an excellent platform for research and experimentation.

Installation

PyTorch provides a very clean interface to get the right combination of tools to be installed. The installation selector on pytorch.org lets you pick your configuration and shows the corresponding command. Stable represents the most currently tested and supported version of PyTorch; this should be suitable for most users. Preview is available if you want the latest version, which is not fully tested and supported. You can choose between Anaconda (recommended) and pip installation packages, with support for various CUDA versions as well.

PyTorch Modules

Now we will discuss the key PyTorch library modules, like Tensors, Autograd, Optimizers and Neural Networks (NN), which are essential to create and train neural networks.

Tensors

Tensors are the workhorse of PyTorch. We can think of tensors as multi-dimensional arrays. PyTorch has an extensive library of operations on them provided by the torch module. PyTorch tensors are very close to the very popular NumPy arrays. In fact, PyTorch features seamless interoperability with NumPy. Compared with NumPy arrays, PyTorch tensors have the added advantage that both the tensors and the related operations can run on the CPU or GPU. The second important thing PyTorch provides is that tensors can keep track of the operations performed on them, which helps to compute gradients or derivatives of an output with respect to any of its inputs.

Tensor refers to the generalization of vectors and matrices to an arbitrary number of dimensions. The dimensionality of a tensor coincides with the number of indexes used to refer to scalar values within the tensor. A tensor of order zero (0D tensor) is just a number or a scalar. A tensor of order one (1D tensor) is an array of numbers or a vector. Similarly, a 2nd-order tensor (2D) is an array of vectors or a matrix.

Now let us create a tensor in PyTorch.

After importing the torch module, we call the function torch.ones, which creates a tensor filled with the value 1.0; here, a 2D tensor with nine elements (3×3).

Other creation functions include torch.zeros, which gives a zero-filled tensor, torch.rand, which samples from a uniform distribution, and torch.randn, which samples from a standard normal distribution.
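As a quick, minimal sketch (the shapes here are illustrative, not taken from the article's original snippet):

import torch

ones = torch.ones(3, 3)     # 2D tensor (3x3) with nine elements, all 1.0
zeros = torch.zeros(2, 4)   # 2D tensor filled with 0.0
uniform = torch.rand(2, 3)  # samples from a uniform distribution on [0, 1)
normal = torch.randn(2, 3)  # samples from a standard normal distribution
print(ones.shape, zeros.shape, uniform.shape, normal.shape)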

Each tensor has an associated type and size. The default tensor type when you use the torch.Tensor constructor is torch.FloatTensor. However, you can convert a tensor to a different type (float, long, double, etc.) by specifying it at initialization or later using one of the typecasting methods. There are two ways to specify the initialization type: either by directly calling the constructor of a specific tensor type, such as FloatTensor or LongTensor, or by using a special method, torch.tensor(), and providing the dtype.
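For example, a small illustration of the two ways (not the article's original gist):

import torch

a = torch.FloatTensor([1, 2, 3])               # constructor of a specific tensor type
b = torch.tensor([1, 2, 3], dtype=torch.long)  # torch.tensor() with an explicit dtype
c = b.float()                                  # typecast later with a casting method
print(a.dtype, b.dtype, c.dtype)               # torch.float32 torch.int64 torch.float32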

To find the maximum value in a tensor, as well as the index that contains it, we can use the max() and argmax() functions. We can also use item() to extract a standard Python value from a single-element tensor.
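A tiny example of these three calls:

import torch

t = torch.tensor([1.0, 5.0, 3.0])
print(torch.max(t))          # tensor(5.) -- the maximum value
print(torch.argmax(t))       # tensor(1)  -- index of the maximum value
print(torch.max(t).item())   # 5.0 -- a plain Python number extracted with item()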

Most functions that operate on a tensor and return a tensor create a new tensor to store the result. If you need an in-place version, look for a method with an appended underscore (_), e.g. Tensor.transpose_() will do an in-place transpose of a tensor.
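For instance (a small sketch, not code from the original article):

import torch

m = torch.rand(2, 3)
m.transpose_(0, 1)   # in-place transpose; m is now 3x2, no new tensor is allocated
m.add_(1.0)          # another in-place operation, addition
print(m.shape)       # torch.Size([3, 2])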

Converting between tensors and NumPy arrays is very simple using torch.from_numpy() and Tensor.numpy().
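A minimal round-trip might look like this:

import numpy as np
import torch

arr = np.ones((2, 2))
t = torch.from_numpy(arr)   # NumPy array -> PyTorch tensor (shares memory on CPU)
back = t.numpy()            # PyTorch tensor -> NumPy array
print(type(t), type(back))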

Another common and very useful operation is reshaping a tensor. We can do this with either view() or reshape():
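For example (an illustrative snippet, not the original gist):

import torch

x = torch.arange(12)       # 1D tensor with the values 0..11
v = x.view(3, 4)           # reinterpret as 3x4 without copying
r = x.reshape(2, 6)        # reshape; may copy if the tensor is non-contiguous
print(v.shape, r.shape)    # torch.Size([3, 4]) torch.Size([2, 6])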

Tensor.reshape() and Tensor.view(), though, are not the same:

  • Tensor.view() works only on contiguous tensors and will never copy memory. It will raise an error on a non-contiguous tensor. But you can make the tensor contiguous by calling contiguous() and then you can call view().
  • Tensor.reshape() will work on any tensor and can make a clone if it is needed.

PyTorch supports broadcasting similar to NumPy. Broadcasting allows you to perform element-wise operations between tensors of different but compatible shapes. Refer to the PyTorch documentation on broadcasting semantics for the details.
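A quick illustration of broadcasting:

import torch

a = torch.ones(3, 4)                      # shape (3, 4)
b = torch.tensor([1.0, 2.0, 3.0, 4.0])    # shape (4,)
c = a + b                                 # b is broadcast across the rows of a
print(c.shape)                            # torch.Size([3, 4])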

Tensor in a nutshell: What, How and Where

Three attributes which uniquely define a tensor are:

dtype: What is actually stored in each element of the tensor? This could be floats or integers etc. PyTorch has nine different data types.

layout: How we logically interpret this physical memory. The most common layout is a strided tensor. Strides are a list of integers: the k-th stride represents the jump in the memory necessary to go from one element to the next one in the k-th dimension of the Tensor.

device: Where the tensor’s physical memory is actually stored, e.g., on a CPU, or a GPU. The torch.device contains a device type ('cpu' or 'cuda') and optional device ordinal for the device type.
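These three attributes can be inspected directly on any tensor; for example:

import torch

t = torch.zeros(2, 3)
print(t.dtype)    # torch.float32 -- what is stored in each element
print(t.layout)   # torch.strided -- how the memory is logically interpreted
print(t.device)   # cpu -- where the data physically lives

if torch.cuda.is_available():             # move the tensor to a GPU if one is present
    t_gpu = t.to(torch.device('cuda'))
    print(t_gpu.device)                   # cuda:0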

Autograd

Autograd is PyTorch's automatic differentiation system. What does automatic differentiation do? Given a network, it calculates the gradients automatically. When computing the forward pass, autograd simultaneously performs the requested computations and builds up a graph representing the function that computes the gradient.

PyTorch tensors can remember where they come from, in terms of the operations and parent tensors that originated them, and they can provide the chain of derivatives of such operations with respect to their inputs automatically. This is achieved through requires_grad: if set to True, the operations performed on the tensor are tracked.

t = torch.tensor([1.0, 0.0], requires_grad=True)

After calculating the gradient, the value of the derivative is automatically populated as a grad attribute of the tensor. For any composition of functions with any number of tensors with requires_grad=True, PyTorch computes derivatives throughout the chain of functions and accumulates their values in the grad attribute of those tensors.
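Building on the tensor t defined above, a minimal sketch of this behaviour:

import torch

t = torch.tensor([1.0, 0.0], requires_grad=True)
loss = (t ** 2).sum()    # a simple composition of functions of t
loss.backward()          # autograd computes d(loss)/dt through the chain
print(t.grad)            # tensor([2., 0.]) -- populated in the grad attribute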

Optimizers

Optimizers are used to update the weights and biases, i.e. the internal parameters of a model, to reduce the error. Please refer to my other article for more details.

PyTorch has a torch.optim package with various optimization algorithms like SGD (Stochastic Gradient Descent), Adam, RMSprop, etc.

Let us see how we can create one of the provided optimizers, such as SGD or Adam.
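A sketch of what creating one might look like (params here is a stand-in parameter tensor and the learning rate is an arbitrary example value):

import torch
import torch.optim as optim

params = torch.randn(2, requires_grad=True)        # a stand-in parameter tensor
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)  # or, e.g., optim.Adam([params], lr=learning_rate)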

Without using optimizers, we would need to manually update the model parameters by something like:
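Continuing the toy example above, the manual update would roughly be:

loss = (params ** 2).sum()    # a toy loss, just to get a gradient
loss.backward()

with torch.no_grad():                          # don't track this update in autograd
    params -= learning_rate * params.grad      # gradient-descent step by hand
    params.grad.zero_()                        # reset the gradient for the next iteration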

We can use the step() method of our optimizer to take an optimization step, instead of manually updating each parameter.

The value of params is updated when step() is called. The optimizer looks at params.grad and updates params by subtracting learning_rate times grad from it, exactly as we did manually without an optimizer.
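With the optimizer, the same update (again continuing the toy example above) becomes:

loss = (params ** 2).sum()    # same toy loss as before
optimizer.zero_grad()         # clear the old gradients
loss.backward()               # populate params.grad
optimizer.step()              # update params from params.grad and the learning rate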

The torch.optim module helps us abstract away the specific optimization scheme; we just pass it a list of params. Since there are multiple optimization schemes to choose from, we only need to pick one for our problem and the underlying PyTorch library does the rest of the magic for us.

Neural Network

In PyTorch the torch.nn package defines a set of modules which are similar to the layers of a neural network. A module receives input tensors and computes output tensors. The torch.nn package also defines a set of useful loss functions that are commonly used when training neural networks.

Steps of building a neural network are:

  • Neural Network Construction: Create the neural network layers and set up the parameters (weights, biases).
  • Forward Propagation: Calculate the predicted output. Measure error.
  • Back-propagation: After finding the error, we backward propagate our error gradient to update our weight parameters. We do this by taking the derivative of the error function with respect to the parameters of our NN.
  • Iterative Optimization: We want to minimize error as much as possible. We keep updating the parameters iteratively by gradient descent.

Build a Neural Network

Let us follow the above steps and create a simple neural network in PyTorch.
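The original code gist is not reproduced in this text, so here is a minimal sketch consistent with the description below; the 1-input, 10-unit hidden layer and the ReLU come from the text, while the single output unit is an assumption:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.hl = nn.Linear(1, 10)   # hidden layer: 1 input feature, 10 outputs
        self.relu = nn.ReLU()        # activation
        self.ol = nn.Linear(10, 1)   # output layer (a single output is assumed here)

    def forward(self, x):
        x = self.hl(x)       # input goes through the hidden layer...
        x = self.relu(x)     # ...then the activation function...
        x = self.ol(x)       # ...then the output layer
        return x

net = Net()
print(net)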

We call our NN Net here. We're inheriting from nn.Module. Combined with super().__init__(), this creates a class that tracks the architecture and provides a lot of useful methods and attributes.

Our neural network Net has one hidden layer self.hl and one output layer self.ol.

This line creates a module for a linear transformation with 1 input and 10 outputs. It also automatically creates the weight and bias tensors. You can access the weight and bias tensors once the network net is created with net.hl.weight and net.hl.bias.

We have defined the activation using self.relu = nn.ReLU().

PyTorch networks created with nn.Module must have a forward() method defined. It takes in a tensor x and passes it through the operations you defined in the __init__ method.

We can see that the input tensor goes through the hidden layer, then activation function (relu), then the output layer.

Here we have to calculate error or loss and backward propagate our error gradient to update our weight parameters.

A loss function takes the (output, target) pair and computes a value that estimates how far away the output is from the target. There are several different loss functions under the torch.nn package. A simple loss is nn.MSELoss, which computes the mean-squared error between the input and the target.
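For example:

import torch
import torch.nn as nn

loss_fn = nn.MSELoss()
output = torch.tensor([0.0, 1.0, 2.0])
target = torch.tensor([0.0, 1.0, 4.0])
loss = loss_fn(output, target)
print(loss)   # tensor(1.3333) -- the mean of the squared differences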

A simple call to loss.backward() propagates the error. Don't forget to clear the existing gradients first, though, otherwise the new gradients will be accumulated onto the existing ones. After calling loss.backward(), have a look at the hidden layer bias gradients before and after the backward call.
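A sketch of that check, reusing the Net and loss_fn from the snippets above (x and y are made-up data):

x = torch.randn(5, 1)      # made-up input batch
y = torch.randn(5, 1)      # made-up targets
loss = loss_fn(net(x), y)

net.zero_grad()            # clear any existing gradients first
print(net.hl.bias.grad)    # nothing yet (None or zeros) before the backward call
loss.backward()
print(net.hl.bias.grad)    # now populated with gradients for the hidden layer bias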

So after calling the backward(), we see the gradients are calculated for the hidden layer.

We have already seen how optimizer helps us to update the parameters of the model.

Please be careful not to miss the zero_grad() call. If you miss calling it, gradients will be accumulated at every call to backward, and your gradient descent will not converge. A recent tweet from Andrej shows the frustration and the time it can take to fix such bugs.

Now with our basic steps (1,2,3) complete, we just need to iteratively train our neural network to find the minimum loss. So we run the training_loop for many epochs until we minimize the loss.

Let us run our neural network to train for input x_t and target y_t.
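The training loop itself is not reproduced in the text; a minimal sketch consistent with the description that follows (the data for x_t and y_t is made up here) could be:

def training_loop(n_epochs, optimizer, model, loss_fn, x_t, y_t):
    for epoch in range(1, n_epochs + 1):
        output = model(x_t)            # forward pass
        loss = loss_fn(output, y_t)    # measure the error

        optimizer.zero_grad()          # clear the old gradients
        loss.backward()                # back-propagate the error gradient
        optimizer.step()               # update the parameters

        if epoch % 300 == 0:           # report the loss every 300 epochs
            print(f'Epoch {epoch}, Loss {loss.item():.4f}')

# made-up regression data: noisy samples of y = 2x + 1
x_t = torch.linspace(-1, 1, 100).unsqueeze(1)
y_t = 2 * x_t + 1 + 0.1 * torch.randn_like(x_t)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-2)
training_loop(1500, optimizer, net, loss_fn, x_t, y_t)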

We call training_loop for 1500 epochs and pass all the other arguments, like the optimizer, model, loss_fn, inputs and target. After every 300 epochs we print the loss, and we can see it decreasing with every iteration. It looks like our very basic neural network is learning.

Plotting the model output (black crosses) against the target data (red circles), the model seems to learn quickly.

So far we have discussed the basic or essential elements of PyTorch to get you started. We can see how modular the code we build is, with each component providing the basic building blocks that can be further extended to create a machine learning solution as per our requirements.

Creating machine learning based solutions for real problems involves significant effort in data preparation. However, the PyTorch library provides many tools to make data loading easier and more readable, like torchvision, torchtext and torchaudio to work with image, text and audio data respectively.

Training machine learning models is often very hard. A tool that can help in visualizing our model and understanding the training progress is always needed when we encounter problems. TensorBoard is one such tool that helps to log events from our model training, including various scalars (e.g. accuracy, loss), images, histograms etc. Since the release of PyTorch 1.2.0, TensorBoard is a PyTorch built-in feature. Please follow the official tutorials for the installation and use of TensorBoard in PyTorch.

Thanks for the read. See you soon with another post 🙂


Tips to improve data curation process

Source: https://searchdatamanagement.techtarget.com/feature/8-tips-to-improve-the-data-curation-process

A data curation and modeling strategy can ensure accuracy and enhance governance. Experts offer eight best practices for curating data. First, start at the source.

The immense benefits of big data are attainable only when organizations can find ways to manage a massive volume of varied data.

“Most Fortune 500 companies are still struggling to manage their data, or what is called data curation,” said Kuldip Pabla, senior vice president of engineering at K4Connect, a technology platform for seniors and individuals living with disabilities.

Data modeling complements the data curation process by creating a framework to guide how data sets can efficiently and accurately be integrated into new analytics applications.

Pabla said he sees data curation as the management of data throughout its lifecycle, from creation or ingestion until it is archived or becomes obsolete and is deleted. During this journey, data passes through various phases of transformation; data curation ensures that the data is securely stored and that it can be reliably and efficiently retrieved.

It’s important to establish a data curation process that ensures accuracy and data governance, provides security, and makes it easier to find and use data sets. Although technology can help, it’s better to start with a solid understanding of your goals rather than focusing on a particular tool.

1. Plan for accuracy at the source

To ensure accuracy, it’s much easier to validate data at the source rather than to assess its accuracy later. You may need to use different practices for data gathered in-house and data from other sources.

One approach to ensuring data accuracy is to ask users to validate their own data; another is to use sampling and auditing to estimate accuracy levels.

2. Annotate and label

It’s easier to manage data sets and troubleshoot problems if the data sets are annotated and labeled as part of the data curation process. This can include simple enrichments, like adding the time and location of an event.

However, “while tagging enriches the data, inaccurate metadata will lead to inaccuracies during transformation or processing of data,” Pabla said.

3. Maintain strong security and privacy practices

Large curated data sets can also pose a risk if they are compromised by hackers or insiders. Good security practices include encryption, de-identification and a strong data governance model.

“At the minimum, CIOs and CTOs can use strong encryptions to encrypt a piece of data in flight and at rest, along with [using] a stronger firewall to guard their cloud infrastructure or data centers,” Pabla said.

Enterprises should also consider separating personally identifiable information from the rest of the data. This makes it easier to safely distribute curated data sets to various analytics teams. Hybrid analytics and machine learning models could even be run between a user’s smartphone or set-top box in a way that provides insight while keeping users in control of their data, Pabla said.

Another way to provide stronger security is to create a strong and effective governance model that outlines who has access to what data — especially raw personal data. The fewer human eyes that have access to data, the more secure it is, Pabla said.

4. Look ahead

It’s important to start the data curation process with the end in mind. Managers need to track how analytics and machine learning apps are using data sets and work backward to improve how the data is aggregated, said Josh Jones, manager of analytics at Aspirent, an analytics consulting service. This includes maintaining at least three periods of time for trending data.

It’s also good to build repeatable, transparent processes for how you clean the data. This enables you to reuse those processes later.

To start, create an inventory of basic steps to identify duplicates and outliers.

“Make sure these basics are applied to each data set consistently,” Jones said.

It’s also important to think about at what point you want to clean the data. Some organizations prefer to do it at the point of intake, while others find it works better right before reporting.

Another practice is to curate data with the tools in mind. For example, if your organization uses specific tools, like Tableau, certain data formats can facilitate faster dashboard development.

5. Balance data governance with agility

Organizations need to strike a balance between data governance and business agility.

“I’m seeing businesses shifting away from the Wild West of self-service data wrangling to team-based, enterprise data preparation and analytics solutions that support better search, collaboration and governance of curated data sets,” said Jen Underwood, senior director at DataRobot, an automated machine learning platform.

Proper data curation and governance provides a management framework that can enable availability, usability, integrity and security of data usage in an enterprise. It improves visibility, control of and trust in data, and by ensuring the safety and accuracy of data, it promotes greater confidence in the resulting insights and analytics.

Some practices that can help strike this balance include engaging users, sharing experiences and focusing on the most-used data first. If users have a tool that encourages them to centralize their data securely, they are more likely to follow secure practices.

A centralized platform can also help users identify data, processes and other information that might be relevant to their analytics or machine learning project. Machine learning can be used to identify trends in usage, as well as potential risks.

6. Identify business needs

Data provides value only when its use satisfies a business need. Daniel Mintz, chief data evangelist at Looker, a data modeling platform, recommends starting with one question.

“What does the business need out of these data sets?” he said. “If you don’t ask that upfront, you can end up with just a mess of data sources that no one actually needs.”

It’s important to pull in the business owners and the business subject-matter experts early. These people are your users. Not pulling them in at the start is just as bad as building software without talking to the intended audience.

“Always avoid curating a bunch of data without talking to the intended audience,” Mintz said.

7. Balance analytics users and data stewards

A centralized data governance effort is important. But it’s also a good idea to include the analytics users as part of this process, said Jean-Michel Franco, senior director of data governance product at Talend.

“They need to contribute to the data governance process, as they are the ones that know their data best,” he said.

One strategy is to adopt a Wikipedia-like approach with a central place where data is shared, and where anyone can contribute to the data curation process under well-defined curation rules.

More centralized data stewardship roles can complement these efforts by implementing well-defined data governance processes covering several activities, including monitoring, reconciliation, refining, deduplication, cleansing and aggregation, to help deliver quality data to applications and end users.

8. Plan for problems

Developing a robust data curation process and data modeling strategy requires admins to account for imprecision, ambiguity and changes in the data.

“Spitting out some numbers at the end of a complex data pipeline is not very helpful if you can’t trace the underlying data back to its source to assess its fitness for purpose at every stage,” explained Justin Makeig, director of product management of MarkLogic Corp., an operational database provider.

Confidence in a source of data, for example, is a key aspect of how MarkLogic’s Department of Defense and intelligence customers think about analytics. They need the ability to show their work but, more importantly, they need to update their findings if that confidence changes. This makes it easier to identify when decisions made in the past relied on a data source that is now known to be untrustworthy. It’s only possible to identify the impact of untrustworthy data by keeping all the context around with the data in a queryable state.

What is the difference between very high, high, medium and low assurance SSL certificates?

Source: https://www.namecheap.com/support/knowledgebase/article.aspx/9508/68/what-is-the-difference-between-very-high-high-medium-and-low-assurance-certificates

The level of assurance mostly depends on the certificate validation type, i.e. the amount of information the certificate applicant provides to the Certificate Authority (Comodo, now Sectigo). The deeper the certificate validation process performed by Comodo (now Sectigo), the higher the assurance.

Domain Validation (DV) SSL certificates provide low and medium assurance. To issue a DV certificate, a Certificate Authority only has to verify that the certificate applicant can manage the domain name the certificate is activated for. The validation process is quite fast, usually taking up to 15 minutes. Low assurance certificates provide the same security and encryption level as the other ones and are not limited in any way. They are good for blogs and personal websites. They are the following: PositiveSSL, PositiveSSL Multi-domain, PositiveSSL Wildcard.
Medium assurance certificates are suitable for small, medium-volume business and personal websites. You can check these certificates here: EssentialSSL, EssentialSSL Wildcard.

SSL certificates that provide the high assurance level are the Organization Validation (OV) ones. For a certificate to be issued, it is necessary to complete Domain Control Validation (DCV), and the Certificate Authority should verify the legal and physical existence of the company that applies for the certificate. The certificate validation process may take up to 2 business days. High assurance certificates are a good solution for medium and large-volume websites.
The list of high assurance certificates: InstantSSL, InstantSSL Pro, PremiumSSL, Unified Communications, PremiumSSL Wildcard, Multi-Domain SSL.

Extended Validation (EV) SSL certificates provide the very high assurance level. One should submit the documents required by the Certificate Authority to verify the certificate applicant. It is also required to verify the legal and physical existence of the company and its telephone number to complete the certificate validation. These certificates are best for large-volume e-commerce websites and large organizations. Additionally, the green bar with the company details will be displayed with EV certificates: EV SSL, EV Multi-Domain SSL.

Which type of SSL certificate to use

Source: https://www.geocerts.com/blog/wildcard-multi-domain-san-or-standard-dv-ssl-wsans-certs

Which type of certificate is right for me?

So, you want a single certificate to cover multiple sites. Which type of certificate should you purchase? The type will depend on a number of variables, including the number of sites, the number of base domains, which subdomains of the base domains are to be covered, and also financial considerations and your company’s IT policies.

Multi-domain SAN SSL certificates

If you want to cover more than one registered base domain on a single certificate, such as yahoo.com and microsoft.com, then your only choice is a multi-domain SAN certificate. We offer several multi-domain SAN certificates both with and without EV features. Each certificate can cover up to 100 sites, from any registered domain name that you own, on the same certificate. Each individual site must be listed as either the Common Name (CN) or a SAN on the certificate.

Pros
  • Secure up to 100 sites from any registered base domain on a single certificate.
  • Lower certificate management costs.
  • Add or change SAN names by purchasing additional SANs throughout the life of the certificate.
Cons
  • Each Site must be listed separately.
  • Certificates with more than 25 SANs may be difficult to administer.
  • Can get expensive.
Considerations
  • A single key pair used by more than one server can conflict with a Company’s IT Policies as it presents the potential for a single point of failure affecting multiple servers.

Wildcard SSL Certificates

A Wildcard certificate will cover any sub domain at a single level for a single registered base domain. The “*” in the Common Name (CN) of a wildcard certificate represents the variable. It is the single variable for the certificate.

Example: a Common Name of *.hawaii.com

Will secure…

hawaii.com
www.hawaii.com
maui.hawaii.com
oahu.hawaii.com
blog.hawaii.com
big-island.hawaii.com

Will not secure…

maui.hawaii.net (different TLD)
big.island.hawaii.com (too many subdomains)
aloha.visit-hawaii.com (different domain)
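To make the single-level rule concrete, here is a small, hypothetical Python sketch of the matching logic described above; it mirrors this article's description (including the base domain being covered via a free SAN) and is not a substitute for real TLS hostname verification:

def wildcard_covers(host, base_domain):
    # The base domain itself is covered (added as a free SAN on the certificate).
    if host == base_domain:
        return True
    # Otherwise the host must be exactly one label followed by the base domain.
    if host.endswith('.' + base_domain):
        prefix = host[:-(len(base_domain) + 1)]
        return '.' not in prefix   # more than one subdomain level is not covered
    return False

for name in ['hawaii.com', 'www.hawaii.com', 'maui.hawaii.com',
             'big.island.hawaii.com', 'maui.hawaii.net', 'aloha.visit-hawaii.com']:
    print(name, wildcard_covers(name, 'hawaii.com'))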

DV vs OV Wildcard Certificates

We offer both Domain Validated (DV) wildcard certificates and Organization Validated (OV) wildcard certificates. The OV wildcard certificates include the organization name on the certificate (e.g., Gotham Books, Inc.) and are vetted both by validating that the organization is registered and in good standing with the local registration authority, and by having a full-time employee of the organization verify that the organization is indeed purchasing the certificate.

A DV certificate does not include any organization information and simply represents that a party that passed Domain Control Validation purchased the certificate, but it does not state who that party is. DV certificates are approved via a simple email or DNS posting, while OV certificates also require the organization validation and verification to be completed manually. DV certs are easier to get and can be issued within just minutes.

Pros
  • Secure unlimited sites from a single registered base domain at a single sub domain level.
  • Add new sites without having to reissue the cert.
  • Sites do not have to be separately listed.
Cons
  • Will only cover sites at the sub domain listed as the variable in the certificate name.
  • Base domain is covered as a SAN only for first level sub domain wildcard certificates.
  • Expensive.
Considerations
  • A Single DV w/SANs certificate will cover up to 5 sub domains for much less than the cost of a Wildcard and does not limit the SANs to a single sub domain.
  • A single key pair used by more than one server can conflict with a Company’s IT Policies as it presents the potential for a single point of failure affecting multiple servers.

Standard DV SSL w/SANs Certificates

The Standard DV SSL w/SANs Certificate will cover the Common Name plus up to 4 additional subdomains of the same base domain. This is our top selling certificate. The certificate is Domain Validated (DV) meaning it is approved via a simple email or DNS posting and does not carry any organization information.

Tip: If the Common Name on the certificate is a first-level subdomain such as www.domain.com or online.domain.com, then the base domain domain.com will be covered as a free SAN and will not count toward the 4 additional SANs. SAN names can be from any subdomain level of the base domain and can be changed at any time during the life of the certificate.

Pros
  • Secure up to 5 sites from a single registered base domain at any sub domain level.
  • Issued quickly with simple approval process.
  • SANs can be changed throughout the life of the certificate.
  • Inexpensive and easy to get.
Cons
  • Will only cover sites that have the same base domain.
  • Does not include organization information.
  • Cannot expand beyond 4 SANs.
Considerations
  • Two Standard DV w/SANs certificates will cover up to 10 sub domains of the same base domain at any level for less than the cost of a wildcard certificate.
  • A single key pair used by more than one server can conflict with a Company’s IT Policies as it presents the potential for a single point of failure affecting multiple servers.

How IIS Bindings Work

Source: https://sharepoint.fpweb.net/sharepoint-blog/understanding-iis-bindings-rules-and-practices/

Let’s make sure our servers are directing web traffic properly with some basic IIS Bindings rules and practices.

While IIS is a powerful tool for hosting sites on Microsoft Servers, we will only be going over the “easy” stuff in this article. However, it’s safe to say that before you perform an operation that requires you to edit the web.config file for a site, you may want to check IIS first since it will typically provide a GUI to do the same thing. Without further ado, let’s dive right in.

IIS or ‘Internet Information Services’ is a set of services for servers using Microsoft’s operating system.

Many versions of IIS exist, but if you are working on a server today, you’ll typically be using 6.0, 7.0, 7.5, 8.0, or 8.5. You can also run more than one version on a server at a time (such as both 6.0 and 7.0). The differences between 6.0 and the other versions are quite vast (how they handle code and features), while 7.0, 7.5, 8.0, and 8.5 are similar (although the later versions offer a few more features). The main purpose of this service is to house, administrate, configure and operate your sites.

What is an IIS Binding?

An IIS binding is simply a mechanism that “binds” to your site and tells the server how that site can be reached. When we talk about IIS bindings, we are talking about this part of IIS:

Internet Information Services

As you can see, it’s aptly named.

Why are IIS Bindings Important?

Because we need them to direct the traffic sent to the server to the appropriate site. In other words, DNS will direct traffic to your server, and then bindings take over to get that traffic to the appropriate site by using the sites’ binding.

The primary rule for bindings is that when you have multiple bindings, each must differ in some way. We have three binding options with which to do this: Port, IP Address, and Host Header. When working with multiple sites, no site can have the same Port, IP Address, and Host Header as another; they must differ on at least one of these criteria.

How do IIS Bindings Work?

So when a request hits the server, the first thing it looks for is Port. The two most common Ports are port 80 (HTTP) and 443 (HTTPS). That being said, it’s not uncommon to have multiple sites using either Port.

The next item that a request considers is IP Address. This can really go either way. If you have plenty of static IP Addresses to use, then it’s best practice to have a unique IP Address for each site. However, you can use the same IP Address for multiple sites if you are paying per IP.

So, what happens when you share the same Port and IP Address? You must differentiate by Host Header (aka host name). This name must match the request exactly. So, if you are trying to reach a site by www.test.com, then you must have www.test.com as the Host Header. Want to access by Test.com? Then you must have Test.com in the Host Header. Sites may have multiple Host Headers to accommodate CNAMEs/aliases or sites that are accessed by different names.
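As a rough conceptual illustration only (a simplified Python model of the matching order described above, not how IIS actually implements it):

# Each binding: (site name, port, IP address, host header); '*' means "any IP".
bindings = [
    ('Site A', 80, '*', 'www.test.com'),
    ('Site B', 80, '*', 'test.com'),
    ('Site C', 443, '10.0.0.5', ''),   # no host header, distinguished by port/IP
]

def resolve(port, ip, host):
    for site, b_port, b_ip, b_host in bindings:
        if b_port != port:                # 1. the port must match
            continue
        if b_ip not in ('*', ip):         # 2. then the IP address
            continue
        if b_host and b_host != host:     # 3. then the host header, exactly
            continue
        return site
    return None

print(resolve(80, '10.0.0.5', 'www.test.com'))     # Site A
print(resolve(80, '10.0.0.5', 'test.com'))         # Site B
print(resolve(443, '10.0.0.5', 'other.test.com'))  # Site C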

*On a side note, when you’re ordering an SSL certificate, be careful of the name you choose. The site must be accessed by exactly that name (unless it’s a wildcard certificate). You can use IIS redirects to “force” requests into the appropriate bindings, but I will cover that in detail in another post.

Proper Planning Makes IIS Bindings Successful

This is the “very” basic essence of bindings. While seemingly simple, this can become a major pain point when dealing with many sites if proper planning and/or documentation is not utilized. If you cannot get a site to start without it automatically stopping another, then you have a binding conflict that you must address. Proper planning will avoid this issue altogether, so keep these rules in mind when planning your environment.

Dynamic Programming Practice Problems

remember the past
Source: https://blog.usejournal.com/top-50-dynamic-programming-practice-problems-4208fed71aa3

Dynamic Programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions using a memory-based data structure (array, map, etc.). Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time. This technique of storing solutions to subproblems instead of recomputing them is called memoization.
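As a tiny, generic illustration of memoization (not one of the listed problems):

from functools import lru_cache

@lru_cache(maxsize=None)   # caches each subproblem result after its first computation
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))   # 12586269025, computed in linear time thanks to memoization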

Here’s a brilliant explanation of the concept of Dynamic Programming on Quora — Jonathan Paulson’s answer to “How should I explain dynamic programming to a 4-year-old?”

Please find below the top 50 common data structure problems that can be solved using Dynamic Programming:

  1. Longest Common Subsequence | Introduction & LCS Length
  2. Longest Common Subsequence | Finding all LCS — Techie Delight
  3. Longest Common Substring problem — Techie Delight
  4. Longest Palindromic Subsequence using Dynamic Programming
  5. Longest Repeated Subsequence Problem — Techie Delight
  6. Implement Diff Utility — Techie Delight
  7. Shortest Common Supersequence | Introduction & SCS Length
  8. Shortest Common Supersequence | Finding all SCS — Techie Delight
  9. Longest Increasing Subsequence using Dynamic Programming — Techie Delight
  10. Longest Bitonic Subsequence — Techie Delight
  11. Increasing Subsequence with Maximum Sum — Techie Delight
  12. The Levenshtein distance (Edit distance) problem — Techie Delight
  13. Find size of largest square sub-matrix of 1’s present in given binary matrix — Techie Delight
  14. Matrix Chain Multiplication using Dynamic Programming
  15. Find the minimum cost to reach last cell of the matrix from its first cell — Techie Delight
  16. Find longest sequence formed by adjacent numbers in the matrix — Techie Delight
  17. Count number of paths in a matrix with given cost to reach destination cell
  18. 0–1 Knapsack problem — Techie Delight
  19. Maximize the Value of an Expression — Techie Delight
  20. Partition problem | Dynamic Programming Solution — Techie Delight
  21. Subset Sum Problem — Techie Delight
  22. Minimum Sum Partition Problem — Techie Delight
  23. Find all N-digit binary strings without any consecutive 1’s — Techie Delight
  24. Rod Cutting Problem — Techie Delight
  25. Maximum Product Rod Cutting — Techie Delight
  26. Coin change-making problem (unlimited supply of coins) — Techie Delight
  27. Coin Change Problem (Total number of ways to get the denomination of coins) — Techie Delight
  28. Longest Alternating Subsequence Problem — Techie Delight
  29. Count number of times a pattern appears in given string as a subsequence
  30. Collect maximum points in a matrix by satisfying given constraints — Techie Delight
  31. Count total possible combinations of N-digit numbers in a mobile keypad — Techie Delight
  32. Find Optimal Cost to Construct Binary Search Tree — Techie Delight
  33. Word Break Problem | Dynamic Programming — Techie Delight
  34. Word Break Problem | Using Trie Data Structure — Techie Delight
  35. Total possible solutions to linear equation of k variables — Techie Delight
  36. Wildcard Pattern Matching — Techie Delight
  37. Find Probability that a Person is Alive after Taking N steps on an Island
  38. Calculate sum of all elements in a sub-matrix in constant time — Techie Delight
  39. Find Maximum Sum Submatrix in a given matrix — Techie Delight
  40. Find Maximum Sum Submatrix present in a given matrix — Techie Delight
  41. Find maximum sum of subsequence with no adjacent elements — Techie Delight
  42. Maximum Subarray Problem (Kadane’s algorithm) — Techie Delight
  43. Single-Source Shortest Paths — Bellman Ford Algorithm — Techie Delight
  44. All-Pairs Shortest Paths — Floyd Warshall Algorithm — Techie Delight
  45. Pots of Gold Game using Dynamic Programming — Techie Delight
  46. Find minimum cuts needed for palindromic partition of a string
  47. Maximum Length Snake Sequence — Techie Delight
  48. 3-Partition Problem — Techie Delight
  49. Calculate size of the largest plus of 1’s in binary matrix — Techie Delight
  50. Check if given string is interleaving of two other given strings

How to transfer a GoDaddy SSL certificate to Windows IIS 8

Source: https://www.itworld.com/article/2931768/how-to-transfer-a-godaddy-ssl-certificate-to-windows-iis-8.html

Moving an SSL certificate to a new server isn’t always straightforward. Moving one from GoDaddy to a new Windows Server proved downright frustrating, since their directions result in a disappearing certificate.

The problem is that the generated certificate was created using a certificate signing request (CSR) from a different machine, and the private key is not included in the SSL bundle (with good reason). When you try to import an SSL certificate into IIS using the steps outlined by GoDaddy, Windows Server will give the impression that everything worked just fine and no error will be given. You can even open and view the newly imported certificate in IIS Manager. But this is a lie! Silently, Windows rejected the certificate because it did not contain a private key it could validate, and you only find out about it when you try to apply the cert to a website and the certificate no longer exists.

To solve this issue we can make use of a handy tool GoDaddy provides that lets you re-key a certificate. This way, we can generate a new ‘Create Certificate Request’ on the new server and re-key the SSL certificate based on the newly generated private key. It sounds hard, but here are the steps:

Step 1)

Log into your GoDaddy account, expand the SSL Certificates section, and click the Manage button for the SSL Certificate you want to transfer.

Step 2)

Click on the ‘View Status’ link for the SSL Certificate to transfer

Step 3)

Click on the big ‘Manage’ button. You’ll now be on the Manage Certificate screen

Step 4)

Before we proceed further, we need to generate a new Certificate Signing Request on the Windows Server. Open IIS Manager and click on the server node you want to add the certificate to. Then select the ‘Server Certificates’ item toward the bottom. On the Server Certificates screen, click the ‘Create Certificate Request’ link. Fill out the certificate information and save the file to your desktop. Open the certificate request file in notepad and copy the contents.

Step 5)

Back at the GoDaddy certificate manager, expand the Re-Key Certificate area and paste in your certificate request. Verify the domain name you want to protect is displayed and hit save. GoDaddy will now go through a process of validating your account and re-keying your certificate. This should only take a couple of minutes.

Step 6)

Back at the Server Management Options screen, click the big ‘Download’ button to retrieve your newly keyed certificate. Choose the IIS option from the select box and copy the zip file to your Windows Server.

Step 7)

Unzip the certificate files to your Windows server, then click the ‘Complete Certificate Request’ link back in the IIS Manager Server Certificates area. Choose the certificate you just downloaded from GoDaddy and select the ‘Personal’ store for the certificate and click OK.

Finished

Now you should be able to choose the SSL certificate for your website in IIS as you’d expect without it vanishing on you.

Don’t forget to import the intermediate certificates from GoDaddy (those instructions do work) if you don’t have them already, or else your new certificate may throw warnings on client browsers.