2 Ways to Deploy Website in IIS

Source: https://www.guru99.com/deploying-website-iis.html

How to Deploy Website in IIS via File copy

After developing a web application, the next important step is to deploy the web application. The web application needs to be deployed so that it can be accessed by other users. The deployment is done to an IIS Web server.

There are various ways to deploy a web application. Let’s look at the first method which is the File copy.

We use the web application created in the earlier sections. Let’s follow the below-mentioned steps to achieve this.

Step 1) Let’s first ensure we have our web application ‘DemoApplication’ open in Visual Studio.


Step 2) Open the ‘Demo.aspx’ file and enter the string “Guru 99 ASP.Net.”


<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
</head>
<body>
    <form id="form1" runat="server">
        Guru 99 ASP.Net
    </form>
</body>
</html>

Now just run the application in Visual Studio to make sure it works.



When you run the application, the browser should display the text ‘Guru 99 ASP.Net’.

Step 3) Now it’s time to publish the solution.

  1. Right-click the ‘DemoApplication’ in the Solution Explorer
  2. Choose the ‘Publish’ Option from the context menu.


It will open another screen (see step below).

Step 4) In the next step, choose the ‘New Profile’ to create a new Publish profile. The publish profile will have the settings for publishing the web application via File copy.


Step 5) In the next screen we have to provide the details of the profile.

  1. Give a name for the profile such as FileCopy
  2. Click the OK button to create the profile


Step 6) In this step, we specify that we are going to publish the website via File copy.

  1. Choose the Publish method as File System.
  2. Enter the target location as C:\inetpub\wwwroot – This is the standard file location for the Default Web site in IIS.
  3. Click the ‘Next’ button to proceed.


Step 7) In the next screen, click the Next button to proceed.


Step 8) Click the ‘Publish’ button in the final screen


When all of the above steps are executed, you will see the publish output in Visual Studio.



From the output, you will see that the Publish succeeded.

Now just open the browser and go to the URL – http://localhost/Demo.aspx


When you browse to http://localhost/Demo.aspx, the page appears and displays the text ‘Guru 99 ASP.Net’.

How to Publish ASP.NET Website

Another method to deploy the web application is by publishing the website. The key differences in this method are:

  • You have more control over the deployment.
  • You can specify the Web site to which you want to deploy your application.
  • For example, suppose you had two websites, WebSiteA and WebSiteB. If you use the Web publish method, you can publish your application to either website. Also, you don’t need to know the physical path of the Web site.
  • In the File copy method, you have to know the physical path of the website.

Let’s use the same Demo Application and see how we can publish using the “website publish method.”

Step 1) In this step,

  1. Right-click the ‘DemoApplication’ in the Solution Explorer
  2. Choose the Publish Option from the context menu.


Step 2) On the next screen, select the ‘New Profile’ option to create a new Publish profile. The publish profile will have the settings for publishing the web application via Web Deploy.


Step 3) In the next screen we have to provide the details of the profile.

  1. Give a name for the profile such as ‘WebPublish’
  2. Click the ‘OK’ button to create the profile


Step 4) In the next screen, you need to give all the details for the publish process

  1. Choose the Publish method as Web Deploy
  2. Select the server as Localhost
  3. Enter the site name as Default Web Site – remember that this is the name of the website in IIS
  4. Enter the destination URL as http://localhost
  5. Finally, click the Next button to proceed


Step 5) Click the ‘Next’ button on the following screen to continue


Step 6) Finally, click the Publish button to publish the Website


When all of the above steps are executed, you will see the publish output in Visual Studio.



From the output, you will see that the Publish succeeded.

Now just open the browser and go to the URL – http://localhost/Demo.aspx


When you browse to http://localhost/Demo.aspx, the page appears and displays the text ‘Guru 99 ASP.Net’.


  • After an ASP.Net application is developed, the next step is that it needs to be deployed.
  • In .Net, IIS is the default web server for ASP.Net applications.
  • ASP.Net web applications can be deployed using the File copy method.
  • ASP.Net web applications can also be deployed using the Web Publish method.

Data visualization tools

Whether you’re working with a large distributed team or overseeing a small group of developers at a startup, project management is a juggling act by definition. The job of project management is all about keeping track of progress, resource allocation and deliverables.

This juggling act can be a challenge even for seasoned project managers – quite simply, there are just so many things to track.

That’s where project management tools come into play. Basic project management can be performed with nothing more than email and a spreadsheet, but the task is greatly improved with software that both tracks all aspects of a project and visually represents how the project is evolving. The visual component is especially important because it helps managers see at a glance how the project is moving forward.

There are many ways that a list of top data visualization tools for project management might be selected: purpose-built add-ons for existing project management software, all-in-one solutions that both track and display project status, and enterprise-grade data visualization tools.

For most project managers, simple all-in-one solutions represent the sweet spot of visualization and ease of use. So we’ll focus on those tools that are both visual and complete solutions unto themselves.


1. Quire

All you need and nothing more

If you need to manage and visualize projects but don’t want advanced functionality cluttering your project management, Quire might be your tool of choice.

Quire works by letting you map out tasks and thoughts in a simple to-do list format that can easily be rearranged and assigned to team members. Once tasks have been defined in the app, you can visually organize and assign these tasks using a Kanban board built into the system. The software makes it easy to flip between task lists and the Kanban board as needed.

The beauty of Quire is that it also offers a number of visual representations of your task list that you can choose from, including pie charts, project summaries and graphs. All the basic visual representations are there except for Gantt charts, the one strike against this platform. Overall, though, Quire delivers a streamlined project management interface that nicely strikes a balance between simplicity and power.


2. Casual

Best for flowchart wizards

There are many ways in which you can visually organize your project. If your preferred style is flowchart-based organization, you’re going to love Casual.

Instead of Kanban boards or Gantt charts, Casual is built exclusively around a single flowchart interface where you organize projects by drawing lines between tasks, and assign team members to each step in the flowchart. Each team member gets a task list based on the flowchart, but essentially project managers that use Casual track progress through the single-pane flowchart.

Casual is easily the best solution if you prefer the flowchart format and have a project that can be represented in that format. If you want a range of charts and other ways to organize or manage a large team, however, Casual is not for you.


3. Asana

Traditional and solid

Asana is one of the web-based project management leaders, and with good reason. You get most of the project management tools you expect with the platform, including task lists, Kanban boards, calendar format, conversations and Gantt charts.

You manage projects by entering tasks on a task list, with manual or auto assignment to team members. A Kanban board makes organization of these tasks easy, and a timeline (read: Gantt chart) visually shows the progress of the project.

While task management is quite visual, and most project managers will feel at home on the platform, Asana lacks a chart dashboard for tracking progress metrics in other visual representations. So this is visual management, but it isn’t strong on visual data representation.


4. Wrike

Dashboard project management

If Gantt charts and dashboards are your thing, Wrike should be on your project management shortlist.

Wrike makes it easy to create tasks and workflows, and then manage these tasks visually in a Gantt chart or calendar format. One feature that’s great about Wrike is that you can visually create custom workflows for your project.

Data visualization is also a strong point for Wrike, which lets you set up custom dashboard items that visually show the progress of key project metrics. In one quick view, you can see the status of each area of the project.

Furthermore, Wrike is highly scalable, with many integrations to other platforms and support for large teams.


5. Targetprocess

Clunky but full-featured

If you want everything in your project management software, including a host of data representation options, Targetprocess is for you.

Taking a dashboard approach but also working in Kanban and Gantt chart formats, Targetprocess basically gives your project the full suite of features. Complex task lists can be configured and rearranged, and it comes with a multitude of data representation views that can be added to the dashboard for quick progress assessment.

Targetprocess is geared towards agile software development projects, and it can be overwhelming in its feature set and clunky in its interface, but this solution visually represents data in more ways than rivals, and does everything a software project manager could want.

Performance of browser rendering engine

Source: https://blog.sessionstack.com/how-javascript-works-the-rendering-engine-and-tips-to-optimize-its-performance-7b95553baeda

So far, in our previous blog posts of the “How JavaScript works” series we’ve been focusing on JavaScript as a language, its features, how it gets executed in the browser, how to optimize it, etc.

When you’re building web apps, however, you don’t just write isolated JavaScript code that runs on its own. The JavaScript you write is interacting with the environment. Understanding this environment, how it works and what it is composed of will allow you to build better apps and be well-prepared for potential issues that might arise once your apps are released into the wild.

So, let’s see what the browser main components are:

  • User interface: this includes the address bar, the back and forward buttons, bookmarking menu, etc. In essence, this is every part of the browser display except for the window where you see the web page itself.
  • Browser engine: it handles the interactions between the user interface and the rendering engine
  • Rendering engine: it’s responsible for displaying the web page. The rendering engine parses the HTML and the CSS and displays the parsed content on the screen.
  • Networking: these are network calls such as XHR requests, made by using different implementations for the different platforms, which are behind a platform-independent interface. We talked about the networking layer in more detail in a previous post of this series.
  • UI backend: it’s used for drawing the core widgets such as checkboxes and windows. This backend exposes a generic interface that is not platform-specific. It uses operating system UI methods underneath.
  • JavaScript engine: We’ve covered this in great detail in a previous post from the series. Basically, this is where the JavaScript gets executed.
  • Data persistence: your app might need to store data locally. The supported types of storage mechanisms include localStorage, IndexedDB, WebSQL, and FileSystem.

In this post, we’re going to focus on the rendering engine, since it’s handling the parsing and the visualization of the HTML and the CSS, which is something that most JavaScript apps are constantly interacting with.

Overview of the rendering engine

The main responsibility of the rendering engine is to display the requested page on the browser screen.

Rendering engines can display HTML and XML documents and images. If you’re using additional plugins, the engines can also display different types of documents such as PDF.

Rendering engines

Similar to the JavaScript engines, different browsers use different rendering engines as well. These are some of the popular ones:

  • Gecko — Firefox
  • WebKit — Safari
  • Blink — Chrome, Opera (from version 15 onwards)

The process of rendering

The rendering engine receives the contents of the requested document from the networking layer.

Constructing the DOM tree

The first step of the rendering engine is parsing the HTML document and converting the parsed elements to actual DOM nodes in a DOM tree.

Imagine you have the following textual input:

    <html>
    <head>
      <meta charset="UTF-8">
      <link rel="stylesheet" type="text/css" href="theme.css">
    </head>
    <body>
      <p> Hello, <span> friend! </span> </p>
      <img src="smiley.gif" alt="Smiley face">
    </body>
    </html>

The DOM tree for this HTML will look like this:

Basically, each element is represented as the parent node to all of the elements, which are directly contained inside of it. And this is applied recursively.
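This recursive containment is easy to see in code. Here is a rough sketch, with plain objects standing in for real DOM nodes (which carry far more state than a tag name):

```javascript
// A minimal stand-in for a DOM tree: each node lists the nodes it
// directly contains, and the structure nests recursively.
const domTree = {
  tag: "html",
  children: [
    {
      tag: "head",
      children: [{ tag: "meta", children: [] }, { tag: "link", children: [] }],
    },
    {
      tag: "body",
      children: [
        { tag: "p", children: [{ tag: "span", children: [] }] },
        { tag: "img", children: [] },
      ],
    },
  ],
};

// Recursively list every node with its depth, mirroring how the
// parser emits DOM nodes as it walks the nested markup.
function flatten(node, depth = 0, out = []) {
  out.push({ tag: node.tag, depth });
  for (const child of node.children) flatten(child, depth + 1, out);
  return out;
}

console.log(flatten(domTree).map((n) => "  ".repeat(n.depth) + n.tag).join("\n"));
```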

Constructing the CSSOM tree

CSSOM refers to the CSS Object Model. While the browser was constructing the DOM of the page, it encountered a link tag in the head section which was referencing the external theme.css CSS style sheet. Anticipating that it might need that resource to render the page, it immediately dispatched a request for it. Let’s imagine that the theme.css file has the following contents:

body { 
  font-size: 16px;
}

p { 
  font-weight: bold; 
}

span { 
  color: red; 
}

p span { 
  display: none; 
}

img { 
  float: right; 
}
As with the HTML, the engine needs to convert the CSS into something that the browser can work with: the CSSOM. Here is what the CSSOM tree will look like:

Do you wonder why the CSSOM has a tree structure? When computing the final set of styles for any object on the page, the browser starts with the most general rule applicable to that node (for example, if it is a child of a body element, then all body styles apply) and then recursively refines the computed styles by applying more specific rules.

Let’s work with the specific example that we gave. Any text contained within a span tag that is placed within the body element has a font size of 16 pixels and a red color: the font size is inherited from the body element, while the color comes from the span rule. If a span element is a child of a p element, then its contents are not displayed due to the more specific display: none rule that applies to it.

Also, note that the above tree is not the complete CSSOM tree and only shows the styles we decided to override in our style sheet. Every browser provides a default set of styles also known as “user agent styles” — that’s what we see when we don’t explicitly provide any. Our styles simply override these defaults.
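A rough sketch of that refinement process (illustrative only: it conflates inheritance with selector matching and ignores real specificity rules, but it shows why general-to-specific layering produces a tree-shaped model):

```javascript
// Styles from the example sheet, ordered from most general to most
// specific (a simplification of real cascade/specificity ordering).
const rules = [
  { selector: "body", style: { fontSize: "16px" } },
  { selector: "p", style: { fontWeight: "bold" } },
  { selector: "span", style: { color: "red" } },
  { selector: "p span", style: { display: "none" } },
];

// Compute a node's style from its ancestor path by applying every
// rule whose selector parts appear, in order, along that path.
// General rules are applied first, specific ones overwrite them last.
function computedStyle(path) {
  const style = {};
  for (const { selector, style: decl } of rules) {
    const parts = selector.split(" ");
    let i = 0;
    for (const tag of path) if (tag === parts[i]) i++;
    if (i === parts.length) Object.assign(style, decl);
  }
  return style;
}

// A span inside a p inherits 16px and red, but the more specific
// "p span" rule then hides it.
console.log(computedStyle(["body", "p", "span"]));
```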

Constructing the render tree

The visual instructions in the HTML, combined with the styling data from the CSSOM tree, are being used to create a render tree.

What is a render tree you may ask? This is a tree of the visual elements constructed in the order in which they will be displayed on the screen. It is the visual representation of the HTML along with the corresponding CSS. The purpose of this tree is to enable painting the contents in their correct order.

Each node in the render tree is known as a renderer or a render object in Webkit.

This is what the render tree for the above DOM and CSSOM trees will look like:

To construct the render tree, the browser does roughly the following:

  • Starting at the root of the DOM tree, it traverses each visible node. Some nodes are not visible (for example, script tags, meta tags, and so on) and are omitted since they are not reflected in the rendered output. Some nodes are hidden via CSS and are also omitted from the render tree. For example, the span node in the example above is not present in the render tree because an explicit rule sets the display: none property on it.
  • For each visible node, the browser finds the appropriate matching CSSOM rules and applies them.
  • It emits visible nodes with content and their computed styles
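The three steps above can be sketched as a recursive filter over the DOM tree (simplified: real engines also generate anonymous boxes, split text runs, and match full selectors rather than bare tag names):

```javascript
// Tags that never produce visual output.
const HIDDEN_TAGS = new Set(["script", "meta", "link", "head"]);

// Build a render tree by walking the DOM tree and dropping nodes that
// produce no visual output: non-visual tags and display:none subtrees.
function buildRenderTree(node, styles) {
  if (HIDDEN_TAGS.has(node.tag)) return null;
  if ((styles[node.tag] || {}).display === "none") return null;
  const children = (node.children || [])
    .map((c) => buildRenderTree(c, styles))
    .filter(Boolean);
  return { tag: node.tag, children };
}

const dom = {
  tag: "body",
  children: [
    { tag: "script", children: [] },
    { tag: "p", children: [{ tag: "span", children: [] }] },
    { tag: "img", children: [] },
  ],
};

// The span is display:none in this simplified style map, so it
// drops out of the render tree along with the script tag.
const renderTree = buildRenderTree(dom, { span: { display: "none" } });
console.log(JSON.stringify(renderTree, null, 2));
```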

You can take a look at the RenderObject’s source code (in WebKit) here: https://github.com/WebKit/webkit/blob/fde57e46b1f8d7dde4b2006aaf7ebe5a09a6984b/Source/WebCore/rendering/RenderObject.h

Let’s just look at some of the core things for this class:

class RenderObject : public CachedImageClient {
  // Repaint the entire object. Called when, e.g., the color of a border
  // changes, or when a border style changes.
  void repaint();

  // The DOM node this renderer was created for.
  Node* node() const { ... }

  // The computed style.
  const RenderStyle& style() const;
};

Each renderer represents a rectangular area usually corresponding to a node’s CSS box. It includes geometric info such as width, height, and position.

Layout of the render tree

When the renderer is created and added to the tree, it does not have a position and size. Calculating these values is called layout.

HTML uses a flow-based layout model, meaning that most of the time it can compute the geometry in a single pass. The coordinate system is relative to the root renderer. Top and left coordinates are used.

Layout is a recursive process — it begins at the root renderer, which corresponds to the <html> element of the HTML document. Layout continues recursively through a part or the entire renderer hierarchy, computing geometric info for each renderer that requires it.

The position of the root renderer is 0,0 and its dimensions have the size of the visible part of the browser window (a.k.a. the viewport).

Starting the layout process means giving each node the exact coordinates where it should appear on the screen.
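A minimal sketch of such a single-pass, flow-based layout, assuming simple block boxes that stack vertically and fill their parent's width (no floats, margins, or inline flow):

```javascript
// Each leaf box knows its height; block boxes stack vertically and
// fill their parent's width, so one top-down pass suffices.
function layout(box, left = 0, top = 0, width = 800) {
  box.x = left;
  box.y = top;
  box.width = width;
  let childTop = top;
  for (const child of box.children) {
    layout(child, left, childTop, width);
    childTop += child.height;
  }
  // A parent with children is as tall as the content it contains.
  if (box.children.length) box.height = childTop - top;
  return box;
}

// The root renderer sits at 0,0 with the viewport's dimensions.
const root = {
  children: [
    { height: 50, children: [] },
    { height: 30, children: [] },
  ],
};
layout(root);
console.log(root.children[1].y); // 50 — the second block starts below the first
```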

Painting the render tree

In this stage, the renderer tree is traversed and each renderer’s paint() method is called to display the content on the screen.

Painting can be global or incremental (similar to layout):

  • Global — the entire tree gets repainted.
  • Incremental — only some of the renderers change in a way that does not affect the entire tree. The renderer invalidates its rectangle on the screen. This causes the OS to see it as a region that needs repainting and to generate a paint event. The OS does it in a smart way by merging several regions into one.
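The merging of several invalidated regions into one can be sketched as computing a bounding box (real compositors use smarter heuristics to decide when merging is worthwhile):

```javascript
// Merge a list of dirty rectangles into a single bounding box,
// so one repaint covers every invalidated region.
function mergeDirtyRects(rects) {
  if (rects.length === 0) return null;
  const left = Math.min(...rects.map((r) => r.left));
  const top = Math.min(...rects.map((r) => r.top));
  const right = Math.max(...rects.map((r) => r.left + r.width));
  const bottom = Math.max(...rects.map((r) => r.top + r.height));
  return { left, top, width: right - left, height: bottom - top };
}

const merged = mergeDirtyRects([
  { left: 10, top: 10, width: 20, height: 20 },
  { left: 50, top: 40, width: 10, height: 10 },
]);
console.log(merged); // one rectangle covering both regions
```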

In general, it’s important to understand that painting is a gradual process. For better UX, the rendering engine will try to display the contents on the screen as soon as possible. It will not wait until all the HTML is parsed to start building and laying out the render tree. Parts of the content will be parsed and displayed, while the process continues with the rest of the content items that keep coming from the network.

Order of processing scripts and style sheets

Scripts are parsed and executed immediately when the parser reaches a <script> tag. The parsing of the document halts until the script has been executed. This means that the process is synchronous.

If the script is external then it first has to be fetched from the network (also synchronously). All the parsing stops until the fetch completes.

HTML5 adds an option to mark the script as asynchronous (the async attribute) so that it is fetched in parallel and executed without blocking the parsing of the document.

Optimizing the rendering performance

If you’d like to optimize your app, there are five major areas that you need to focus on. These are the areas over which you have control:

  1. JavaScript — in previous posts we covered the topic of writing optimized code that doesn’t block the UI, is memory efficient, etc. When it comes to rendering, we need to think about the way your JavaScript code will interact with the DOM elements on the page. JavaScript can create lots of changes in the UI, especially in SPAs.
  2. Style calculations — this is the process of determining which CSS rule applies to which element based on matching selectors. Once the rules are defined, they are applied and the final styles for each element are calculated.
  3. Layout — once the browser knows which rules apply to an element, it can begin to calculate how much space the latter takes up and where it is located on the browser screen. The web’s layout model defines that one element can affect others. For example, the width of the <body> can affect the width of its children and so on. This all means that the layout process is computationally intensive. The drawing is done in multiple layers.
  4. Paint — this is where the actual pixels are being filled. The process includes drawing out text, colors, images, borders, shadows, etc. — every visual part of each element.
  5. Compositing — since the page parts were drawn into potentially multiple layers they need to be drawn onto the screen in the correct order so that the page renders properly. This is very important, especially for overlapping elements.

Optimizing your JavaScript

JavaScript often triggers visual changes in the browser. All the more so when building an SPA.

Here are a few tips on which parts of your JavaScript you can optimize to improve rendering:

  • Avoid setTimeout or setInterval for visual updates; use requestAnimationFrame instead. The timer callbacks will fire at some point in the frame, possibly right at the end, while we want to trigger the visual change right at the start of the frame so we don’t miss it.
  • Move long-running JavaScript computations to Web Workers as we have previously discussed.
  • Use micro-tasks to introduce DOM changes over several frames. This is in case the tasks need access to the DOM, which is not accessible by Web Workers. This basically means that you’d break up a big task into smaller ones and run them inside requestAnimationFrame, setTimeout, or setInterval, depending on the nature of the task.
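The "break up a big task" idea from the last point can be sketched like this (processBatch is a hypothetical stand-in for your actual DOM work; in a browser you would schedule one batch per frame):

```javascript
// Split a large list of work items into small batches so each batch
// fits inside one frame's budget instead of blocking the UI.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// In a browser you would drain the batches one frame at a time, e.g.:
//   function step() {
//     processBatch(batches.shift());        // hypothetical DOM work
//     if (batches.length) requestAnimationFrame(step);
//   }
const batches = chunk(Array.from({ length: 10 }, (_, i) => i), 4);
console.log(batches.length); // 3 batches: 4 + 4 + 2 items
```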

Optimize your CSS

Modifying the DOM through adding and removing elements, changing attributes, etc. will make the browser recalculate element styles and, in many cases, the layout of the entire page or at least parts of it.

To optimize the rendering, consider the following:

  • Reduce the complexity of your selectors. Selector complexity can take more than 50% of the time needed to calculate the styles for an element, compared to the rest of the work which is constructing the style itself.
  • Reduce the number of elements on which style calculation must happen. In essence, make style changes to a few elements directly rather than invalidating the page as a whole.

Optimize the layout

Layout re-calculations can be very heavy for the browser. Consider the following optimizations:

  • Reduce the number of layouts whenever possible. When you change styles the browser checks to see if any of the changes require the layout to be re-calculated. Changes to properties such as width, height, left, top, and in general, properties related to geometry, require layout. So, avoid changing them as much as possible.
  • Use flexbox over older layout models whenever possible. It works faster and can create a huge performance advantage for your app.
  • Avoid forced synchronous layouts. The thing to keep in mind is that while JavaScript runs, all the old layout values from the previous frame are known and available for you to query. If you access box.offsetHeight it won’t be an issue. If you, however, change the styles of the box before it’s accessed (e.g. by dynamically adding some CSS class to the element), the browser will have to first apply the style change and then run the layout. This can be very time-consuming and resource-intensive, so avoid it whenever possible.
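The forced-synchronous-layout advice boils down to batching: perform every layout read first, then every style write. A sketch with plain objects standing in for DOM elements (offsetHeight here is just a stub property, not a live browser measurement):

```javascript
// Bad pattern: read, write, read, write — in a real browser every
// read after a write forces a fresh synchronous layout pass.
// Good pattern (below): all reads first, then all writes.
function resizeToTallest(elements) {
  // Phase 1: reads only — no layout invalidation between them.
  const tallest = Math.max(...elements.map((el) => el.offsetHeight));
  // Phase 2: writes only — layout is recalculated once afterwards.
  for (const el of elements) {
    el.style.height = tallest + "px";
  }
  return tallest;
}

const els = [
  { offsetHeight: 120, style: {} },
  { offsetHeight: 80, style: {} },
];
console.log(resizeToTallest(els)); // 120
```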

Optimize the paint

This often is the longest-running of all the tasks so it’s important to avoid it as much as possible. Here is what we can do:

  • Changing any property other than transform or opacity triggers a paint, so change other properties sparingly.
  • If you trigger a layout, you will also trigger a paint, since changing the geometry results in a visual change of the element.
  • Reduce paint areas through layer promotion and orchestration of animations.

Rendering is a vital aspect of how SessionStack functions. SessionStack has to recreate as a video everything that happened to your users at the time they experienced an issue while browsing your web app. To do this, SessionStack leverages only the data that was collected by our library: user events, DOM changes, network requests, exceptions, debug messages, etc. Our player is highly optimized to properly render and make use of all the collected data in order to offer a pixel-perfect simulation of your users’ browser and everything that happened in it, both visually and technically.

There is a free plan if you’d like to give SessionStack a try.


Principles for smooth web animations

Source: https://blog.gyrosco.pe/smooth-css-animations-7d8ffc2c1d29

The complete guide to getting 60fps animations with CSS

Since we launched Gyroscope last year, many people have asked about the JavaScript library we use for our animations. We thought about releasing it to the public, but that’s actually not where the magic happens.

We don’t want people to feel like they’re dependent on some special JavaScript plugin that magically solves these problems. For the most part, we’re just taking advantage of the recent improvements in browser performance, GPUs, and the CSS3 spec.

There is no silver bullet for great animations, besides spending a lot of time testing and optimizing them. However, after years of experimentation and hitting the limits of browser performance, we’ve come up with a series of design & code principles that seem to reliably result in nice animations. These techniques should get you pages that feel smooth, work in modern desktop and mobile browsers, and—most importantly—are easy to maintain.

The technology and implementation will be slightly different for everyone, but the general principles should be helpful in almost any situation.

What is an animation?

Animations have been around since before the internet, and making them great is something you could spend a lifetime learning. However, there are some unique constraints and challenges in doing them for the internet.

For smooth 60fps performance, each frame needs to be rendered in less than 16ms (1000 ms ÷ 60 frames ≈ 16.7 ms). That’s not very much time, so we need to find very efficient ways to render each frame for smooth performance.

There are dozens of ways to achieve animations on the web. For example, the filmstrip is an approach that has been around since before the internet, with slightly different hand-drawn frames being swapped out many times a second to create the illusion of motion.

Twitter recently used this simple approach for their new heart animation, flipping through a sprite of frames.

This effect could’ve been done with a ton of tiny elements individually animating, or perhaps as an SVG, but that would be unnecessarily complex and probably not be as smooth.

In many cases, you’ll want to use the CSS transition property to automatically animate an element as it changes. This technique is also known as “tweening”—as in transitioning between two different values. It has the benefit of being easily cancellable or reversible without needing to build all that logic. This is ideal for “set and forget” style animations, like intro sequences, etc. or simple interactions like hovers.

Further reading: All you need to know about CSS Transitions
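Under the hood, tweening is just interpolating between two values as time advances; a minimal linear sketch of what a transition computes per property, per frame (real CSS transitions also apply easing curves):

```javascript
// Linearly interpolate between a start and end value.
// t runs from 0 (start of the transition) to 1 (end), clamped.
function tween(from, to, t) {
  return from + (to - from) * Math.min(Math.max(t, 0), 1);
}

// Halfway through a transition from opacity 0 to 1:
console.log(tween(0, 1, 0.5)); // 0.5
// Cancelling or reversing is just retargeting `to` from the current
// value — no cleanup callbacks needed.
console.log(tween(100, 300, 0.25)); // 150
```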

In other cases, the keyframe-based CSS animation property may be ideal for continuously running background details. For example, the rings in the Gyroscope logo are scheduled to constantly spin. Other types of things that would benefit from the CSS animation syntax are gear ratios.
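The timing math behind such a continuously running animation is simple; here is a sketch of what the browser effectively computes for an infinitely repeating rotation:

```javascript
// For an animation that rotates 360deg every `durationMs` and
// repeats forever, the current angle is the elapsed time modulo
// the cycle length, scaled to a full turn.
function rotationAt(elapsedMs, durationMs) {
  return ((elapsedMs % durationMs) / durationMs) * 360;
}

console.log(rotationAt(500, 2000)); // 90 — a quarter turn in
console.log(rotationAt(2500, 2000)); // 90 — the cycle has wrapped around
```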

So without further ado, here are some tips that will hopefully greatly improve your animation performance…


Don’t change any properties besides opacity or transform!

Even if you think it might be ok, don’t!

Just this one basic principle should get you 80% of the way there, even on mobile. You’ve probably heard this one before—it’s not an original idea but it is seldom followed. It is the web equivalent of “eat healthy and exercise” that sounds like good advice but you probably ignore.

It is quite straightforward once you get used to thinking that way, but may be a big jump for those used to animating traditional CSS properties.

For example, if you wanted to make something smaller, you could use transform: scale() instead of changing the width. If you wanted to move it around, instead of messing with margins or paddings, which would need to rebuild the whole page layout for every frame, you could just use a simple transform: translateX or transform: translateY.

Why does this work?

To a human, changing width, margin or other properties may not seem like a big deal — or preferable since it is simpler — but in terms of what the computer has to do they are worlds apart and one is much, much worse.

The browser teams have put a lot of great work into optimizing these operations. Transforms are really easy to do efficiently, and can often take advantage of your graphics card without re-rendering the elements.

You can go crazy when first loading the page — round all the corners, use images, put shadows on everything, if you’re feeling especially reckless you could even do a dynamic blur. If it just happens once, a few extra milliseconds of calculation time doesn’t matter. But once the content is rendered, you don’t want to keep recalculating everything.

Further reading: Moving elements with translate (Paul Irish)


Hide content in plain sight.

Use pointer-events: none along with zero opacity to hide elements

This one may have some cross-browser caveats, but if you’re just building for WebKit and other modern browsers, it will make your life much easier.

A long time ago, when animations had to be handled via jQuery’s animate(), much of the complexity of fading things in and out came from switching display between none and block at the right time. Too early and the animation wouldn’t finish, but too late and you’d have invisible zero-opacity content covering up your page. Everything needed callbacks to do cleanup after the animation was finished.

The CSS pointer-events property (which has been around for quite a long time now, but is not often used) basically makes things not respond to clicks or interactions, as if they were just not there. It can be switched on and off easily via CSS without interrupting animations or affecting the rendering/visibility in any way.

Combined with an opacity of zero, it basically has the same effect as display: none, but without the performance impact of triggering new renders. When hiding things, I can usually just set the opacity to 0 and turn off pointer-events, and then forget about the element, knowing it will take care of itself.

This works especially well with absolutely positioned elements, because you can be confident that they are having absolutely no impact on anything else on the page.
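The pattern can be wrapped in two tiny helpers; this sketch mutates plain objects standing in for elements (in the browser these would be real el.style writes on DOM nodes):

```javascript
// Hide: invisible and inert, but still rendered — no display:none,
// so no layout work and no animation-cleanup callbacks needed.
function hide(el) {
  el.style.opacity = "0";
  el.style.pointerEvents = "none";
}

// Show: fade back in and accept clicks again.
function show(el) {
  el.style.opacity = "1";
  el.style.pointerEvents = "auto";
}

const el = { style: {} };
hide(el);
console.log(el.style.pointerEvents); // "none"
show(el);
console.log(el.style.opacity); // "1"
```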

It also gives you a bit more leeway, as the timing doesn’t have to be perfect — it isn’t the end of the world if an element is clickable or covering other things for a second longer than it is visible, or if it only becomes clickable once it has fully faded in.


Don’t animate everything at the same time.

Rather, use choreography.

A single animation may be smooth on its own, but running at the same time as many others will probably mess it up. It is very easy to create a basic demo of almost anything running smoothly — but an order of magnitude harder to maintain that performance on a full site. Therefore, it is important to schedule animations properly.

You will want to spread the timings out so everything isn’t starting or running at the exact same time. Typically, 2 or 3 things can be moving at the same time without slowing down, especially if they were kicked off at slightly different times. More than that and you risk lag spikes.

Unless there is literally only one thing on your pages, it is important to understand the concept of choreography. It might seem like a dance term, but it is equally important for animating interfaces. Things need to come in from the right direction and at the right time. Even though they are all separate, they should feel like part of one well-designed unit.

Google’s Material Design has some interesting suggestions on this subject. It is not the only right way to do things, but it is something you should be thinking about and testing.

Further reading: Google Material Design · Motion


Slightly increasing transition delays makes it easy to choreograph motion.

Choreographing animations is really important and will take a lot of experimentation and testing to get feeling right. However, the code for it doesn’t have to be very complicated.

I typically change a single class on a parent element (often on body) to trigger a bunch of transitions, and each one has its own varying transition-delay to come in at the right time. From a code perspective you just have to worry about one state change, and not maintain dozens of timings in your JavaScript.
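As a sketch of the idea (class names and timings are made up): one class on body flips every element to its visible state, and each element’s transition-delay controls when it actually starts moving.

```css
/* Hypothetical markup: adding .loaded to <body> triggers everything at once,
   and each element's transition-delay staggers when it comes in */
.header, .sidebar, .content { opacity: 0; transition: opacity 0.4s; }
body.loaded .header  { opacity: 1; transition-delay: 0s; }
body.loaded .sidebar { opacity: 1; transition-delay: 0.15s; }
body.loaded .content { opacity: 1; transition-delay: 0.3s; }
```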

Animations in the Gyroscope Chrome Extension

Staggering a series of elements is an easy and simple way to choreograph your elements. It’s powerful because it simultaneously looks good while also buying you precious performance—remember you want to have only a few things happening at the same time. You’ll want to spread them out enough that each one feels smooth, but not so much that the whole thing feels too slow. Enough should be overlapping that it feels like a continuous flow rather than a chain of individual things.

Code Sample

There are a couple simple techniques to stagger your elements—especially if it is a long list of things. If there are fewer than 10 items, or a very predictable number (like in a static page), then I usually specify the values in CSS. This is the simplest and easiest to maintain.

A simple SASS loop
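Such a loop might look like this (a sketch, assuming up to 10 `.item` children and made-up timing values):

```scss
// Hypothetical SCSS: each item starts 50ms after the previous one
@for $i from 1 through 10 {
  .item:nth-child(#{$i}) {
    transition-delay: 200ms + ($i - 1) * 50ms;
  }
}
```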

For longer lists or very dynamic content, the timings can be set dynamically by looping through each item.

A simple javascript loop
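In plain JavaScript, the same staggering might look like this (class names and timings are invented; the delay math is kept in its own function so the two variables are easy to tweak):

```javascript
// Delay for the i-th item: a base delay plus a fixed gap per item (in ms)
function staggerDelay(index, baseDelay, gap) {
  return baseDelay + index * gap;
}

// In the browser, apply it to each element in the list:
// document.querySelectorAll('.item').forEach((el, i) => {
//   el.style.transitionDelay = staggerDelay(i, 200, 50) + 'ms';
// });
```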

There are typically two variables: your base delay and then the time delay between each item. It is a tricky balance to find, but when you hit the right set of numbers it will feel just perfect.


Use a global multiplier to design in slow motion

And then speed everything up later.

With animation design, timing is everything. 20% of the work will be implementing something, and the other 80% will be finding the right parameters & durations to get everything in sync and feeling smooth.

Especially when working on choreography of multiple things, and trying to squeeze performance and concurrency out of the page, seeing the whole thing go in slow motion will make it a lot easier.

Whether you’re using JavaScript, or some sort of CSS preprocessor like SASS (which we love), it should be fairly straightforward to do a little extra math and build using variables.

You should make sure it is convenient to try different speeds or timings. For example, if an animation stutters even at 1/10 speed, there might be something fundamentally wrong. If it goes smoothly when stretched out 50x, then it is just a matter of finding the fastest speed it will run at. It may be hard to notice 5-millisecond issues at full speed, but if you slow the whole thing down they will become extremely obvious.
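One way to sketch this in SCSS (variable name and durations are invented): multiply every duration and delay by a single global factor, then crank it up to watch the whole choreography in slow motion.

```scss
// Hypothetical global multiplier: set to 10 to see everything at 1/10 speed
$speed: 1;

.panel {
  transition-duration: $speed * 0.3s;
}
.panel .item {
  transition-delay: $speed * 0.15s;
}
```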

Especially for very complex animations, or solving tricky performance bottlenecks, the ability to see things in slow motion can be really useful.

The main idea is you want to pack a lot of perfect details while it is going slow, and then speed the whole thing up so it feels perfect. It will be very subtle but the user will notice the smoothness and details.

This feature is actually part of OS X—if you shift-click the minimize button or an app icon, you’ll see it animate in slow motion. At one point, we even implemented this slow-motion feature on Gyroscope to activate when you press shift.


Take videos of your UI and replay them to get a valuable third-person perspective.

Sometimes a different perspective helps you see things more clearly, and video is a great way to do this.

Some people build a video in After Effects and then try to implement that on a site. I often end up going the other way around, and try to make a good video from the UI of a site.

Being able to post a Vine or video of something is a fairly high bar. One day I was excited about something I built, and tried to make a recording to share with some friends.

However, when I watched it again I noticed a bunch of things that were not great. There was a big lag spike and all the timings were slightly wrong. It made me cringe a bit and instead of sending it I realized there was a lot more work to do.

It is easy to gloss over these while you’re using it in realtime, but watching animations on video — over and over again or in slow motion — makes any issues extremely obvious.

They say the camera adds 10 pounds. Perhaps it also adds 10 frames.

It has now become an important part of my workflow to watch slow-motion videos of my pages and make changes if any of the frames don’t feel right. It’s easy to just blame it on slow browsers, but with some more optimization and testing it’s possible to work through all of those problems.

Once you’re not embarrassed by catching lag spikes on video, and feel like the video is good enough to share, then the page is probably ready to release.


Network activity can cause lag.

You should preload or delay big HTTP requests

Images are a big culprit for this one, whether a few big ones (a big background perhaps) or tons of little ones (imagine 50 avatars loading), or just a lot of content (a long page with images going down to the footer).

When the page is first loading, tons of things are being initialized and downloaded. Having analytics, ads, and other 3rd party scripts makes that even worse. Sometimes, delaying all the animations by just a few hundred milliseconds after load will do wonders for performance.

Don’t over-optimize for this one until it becomes necessary, but a complicated page might require very precise delays and timings of content to run smoothly. In general, you’ll want to load as little data as possible at the beginning, and then continue loading the rest of the page once the heavy lifting and intro animations are done.
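One hedged way to sketch this (class names invented): suppress intro transitions until the page signals it is ready, then flip a single class a beat after the load event.

```css
/* Nothing animates until <body> gets the .ready class */
body:not(.ready) .intro {
  transition: none;
  opacity: 0;
}
body.ready .intro {
  opacity: 1;
  transition: opacity 0.4s;
}
```

The class itself might be added with something like `window.addEventListener('load', () => setTimeout(() => document.body.classList.add('ready'), 300))`.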

On pages with a lot of data, the work to get everything loaded can be considerable. An animation that works well with static content may fall apart once you start loading it with real data at the same time. If something seems like it should work, or sometimes works smoothly and other times doesn’t, I would suggest checking the network activity to make sure you aren’t doing other stuff at the same time.


Don’t bind directly to scroll.

Seems like a cool idea, but it really isn’t great.

Scroll-based animations have gained a lot of popularity over the last few years, especially ones involving parallax or other special effects. Whether or not they are good design is up for debate, but there are better and worse ways to implement them technically.

A moderately performant way to do things in this category is to treat reaching a certain scroll distance as an event — and just fire things once. Unless you really know what you’re doing, I would suggest avoiding this category, since it is so easy to get wrong and really hard to maintain.
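A minimal sketch of that fire-once approach (threshold and handler names are assumptions; the browser wiring is shown as a comment):

```javascript
// Returns a checker that fires the callback exactly once,
// the first time the scroll position passes the threshold.
function fireOnceAtScroll(threshold, callback) {
  let fired = false;
  return function (scrollY) {
    if (!fired && scrollY >= threshold) {
      fired = true;
      callback();
    }
  };
}

// Browser wiring (ideally throttled, or checked inside requestAnimationFrame):
// const check = fireOnceAtScroll(600, startAnimation);
// window.addEventListener('scroll', () => check(window.scrollY));
```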

Even worse is building your own scroll bar functionality instead of using the default one—aka scrolljacking. Please don’t do this.

This is one of those rules that is especially useful for mobile, but also probably good practice for the ideal user experience.

If you do have a specific type of experience you want that is focused on scrolling or some special events, I would suggest building a quick prototype of it to make sure that it can perform well before spending much time designing it.


Test on mobile early & often.

Most websites are built on a computer, and likely tested most often on the same machine they’re built on. Thus the mobile experience & animation performance will often be an afterthought. Some technologies (like canvas) or animation techniques may not perform as well on mobile.

However, if coded & optimized properly (see rule #1), a mobile experience can be even smoother than on a computer. Mobile optimization was once a very tricky subject, but new iPhones are now faster than most laptops! If you’ve been following the previous tips, you may very well end up with great mobile performance out of the box.

Mobile usage will be a large and very important part of almost any site. It may seem extreme, but I would suggest viewing it exclusively from your phone for a whole week. It shouldn’t feel like a punishment to be forced to use the mobile version, but often it will.

Keep making design improvements & performance enhancements until it feels just as polished and convenient as the big version of the site.

If you force yourself to only use your mobile site for a week, you will probably end up optimizing it to be an even better experience than the big one. Being annoyed by using it regularly is worth it though, if it means that the issues get fixed before your users experience them!


Test frequently on multiple devices

Screen size, density, or device can all have big implications

There are many factors besides mobile vs desktop that can drastically affect performance, like whether a screen is “retina” or not, the total pixel count of the window, how old the hardware is, etc.

Even though Chrome and Safari are both WebKit-based browsers with similar syntax, they both have their own quirks. Each Chrome update can fix things and introduce new bugs, so you need to constantly be on your toes.

Of course, you don’t only want to build for the lowest common denominator, so finding clever ways to progressively add or remove the enhancements can be really useful.

I regularly switch between my tiny MacBook Air and huge iMac, and each cycle reveals small issues and improvements to be made — especially in terms of animation performance but also for overall design, information density, readability, etc.

Media queries can be really powerful tools to address these different segments—styling differently by height or width is a typical use of media queries, but they can also be used to target by pixel density or other properties. Figuring out the OS and type of device can also be useful, as mobile performance characteristics can be very different from computers.
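For example, a density query can strip the most expensive effects on high-resolution screens, where there are many more pixels to paint (selector and choice of effect are illustrative):

```css
/* Hypothetical: lighter effects on high-density screens */
@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
  .card {
    box-shadow: none; /* skip the expensive shadow on retina displays */
  }
}
```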

I hope you’ll find these techniques useful in your next project. Good luck!

Free SEO tools

Source: https://www.verticalresponse.com/blog/6-free-seo-tools-to-boost-your-search-engine-rankings/

Have you ever wondered how to get your website to come up on the first page of search results? Of course you have. Every small business wants to be found online, but it isn’t always an easy task.

Website traffic doesn’t follow the Field of Dreams mantra, “If you build it, they will come.” You can create a killer website, but if you aren’t using search engine optimization (SEO) techniques, your online bleachers will remain empty.

What’s SEO? It’s a way to improve your website’s visibility, so it appears in search results. By making specific changes to your website you can organically increase traffic and please the Google Gods, so your site is listed when people search using certain keywords.

To help boost your rankings without calling in a webmaster, here are six free SEO tools for the time-strapped business owner:

1. Google Trends

Google Trends is a go-to keyword tool. You can see how search queries change over time when people search for your keyword and compare different words or phrases to see which is best.

Let’s say you run a hardware store and you want to ramp up sales of shovels this winter. When people search for a shovel online, do they search for winter shovel or snow shovel? Compare the two using Google Trends. Here’s what you’ll see:


According to the chart, people search for snow shovel more frequently than winter shovel. The chart also shows you when people search for the term. In this case, it’s no surprise that the winter months are when this term is most popular.

You can also take a look at a regional breakdown that shows you where the search terms are most popular.

With this knowledge, you can use the phrase ‘snow shovel’ on your website and blog posts to increase traffic.


This tool shows you how a search engine sees your site. It strips your site down to a base level, without any fancy fonts, headers or images, and displays relevant SEO information. By looking at your site this way, you can see what needs improvement.

All you have to do is enter your URL into the site, no additional downloads necessary.

3. Screaming Frog

What SEO problems does your website face? Not sure? Turn to Screaming Frog. Free for the first 500 URLs, this tool crawls your site looking for SEO roadblocks and provides a report of problem areas.

The tool looks for broken links, missing metadata, oversized files and pictures, duplicate pages and internal links, just to name a few. Think of it as an SEO audit. Use the results to improve your site and SEO.

4. GTmetrix

How fast does your website load? Do you have a page or two on your site that takes too long to come up? Sluggish page speed can hinder SEO. Site speed does play a role in search engine rankings, so you’ll want to double check the speed of your site with GTmetrix.

Just enter your URL into the site and you’ll get a page speed score and a list of ways to improve it. For example, it might suggest resizing images to improve load times.

5. Rank Checker

Where does your website land in search engine results? Find out with Rank Checker. This tool will show you where your site shows up and give you tips to improve it.

You can install a button on your toolbar so you have easy access to this information whenever you’d like. It will take time to move your site up the ranks, but with this tool you can keep an eye on where you stand.

6. Responsive Design Test

How does your site look on a smartphone? Search engines give preferential treatment to websites that look great on all devices, no matter their size or orientation.

To make sure your website looks sharp on every device, use a responsive website design. This design adapts to every device, so you don’t need to create multiple sites.

Not sure if you have a responsive design? Put your website into the Responsive Design Test to find out. If you don’t have a responsive design, consider updating your site or getting help from professional designers at our partner Deluxe.

Components of an Informative SEO Audit

Source: https://www.stonetemple.com/15-crucial-elements-of-an-informative-seo-audit/

In search engine optimization, auditing a website is a critical first step to understanding where the site is at today, and how to make critical improvements to it.

In this post I’m going to walk you through many of the most critical elements of a basic audit. Note that there is much more that you can do, so don’t treat these 15 items as a hard limit on how far you choose to go with your audits!

When we start an audit with a client website, I’m fond of telling them that I hope their site is in horrible shape. It may be non-intuitive, but the worse shape the site is currently in, the better off they are.

After all, it means that the audit will offer more upside to their business. At Stone Temple, we’ve done audits that have led to more than doubling the traffic of a client’s site.

Implementing the recommendations of a good #SEO audit is often enough to significantly raise traffic.

An SEO audit can happen at any time in the lifecycle of a website. Many choose to do one during critical phases, like prior to a new website launch or when they’re planning to redesign or migrate an existing website.

However, audits can be an often-overlooked piece of a website’s strategy. And many don’t realize just how much the technical back end of a website impacts their SEO efforts moving forward.

What Are the Fundamental Components of an SEO Audit?

In a nutshell, here are the basic elements of any SEO Audit (click to jump to that section):

  1. Discoverability
  2. Basic Health Checks
  3. Keyword Health Checks
  4. Content Review
  5. URL Names
  6. URL Redirects
  7. Meta Tags Review
  8. Sitemaps and Robots.txt
  9. Image Alt Attributes
  10. Mobile Friendliness
  11. Site Speed
  12. Links
  13. Subdomains
  14. Geolocation
  15. Code Quality

The SEO Audit – in Detail

Now, let’s look at the crucial elements of auditing a website from an SEO perspective in a lot more detail…

1. Discoverability

You want to make sure you have a nice, accessible site for search engine crawlers. This means that a site’s content is available in HTML form, or in relatively easy-to-interpret JavaScript. For example, Adobe Flash files are difficult for Google to extract information from, though Google has said that it can extract some information.

Part of having an accessible website for search engines and users is the information architecture on a site—how the content and “files” are organized. This helps search engines make connections between concepts and helps users find what they are looking for with ease.

To think about how to do this well, it’s helpful to compare it to how you deal with paper files in your office:

Website architecture is like a good office filing system

A well-organized site hierarchy also helps the search engines better understand the semantic relationships between the sections of the site. This gets reinforced by other key site elements like XML Sitemaps, HTML site maps and breadcrumbs, all of which can help neatly tie the overall site structure together.

Well-structured site architecture helps search engines understand your site.

2. Basic Health Checks

Basic health checks can provide quick red flags when a problem emerges, so it’s good to do these on a regular basis (even more often than you do a full audit). Here are four steps you can take to get a diagnosis of how a website is doing in the search engine results:

  1. Ensure Google Search Console and Bing Webmaster Tools accounts have been verified for the domain (and any subdomains, for mobile or other content areas). Google and Bing also offer site owner validation that allows you to see how the search engines view a site. Then, check these on a regular basis to see if you’ve received any messages from the search engine. If the site has been hit by a penalty from Google, you’ll see a message, and you’ll want to get to that as soon as possible. They’ll also let you know if the site has been hacked.
  2. Find out how many of a website’s pages appear to be in the search index. You can do this in the indexing report in Google Search Console. Has this number changed in an unexpected way since you last saw it? Sudden changes could indicate a problem. Also, does it seem to match up approximately with the number of pages you think exist?

    I wouldn’t worry about it being 20 percent smaller or larger than you think, but if it’s double, triple or more, or only about 20 percent of the site, you probably want to understand why.

  3. Go into Google Search Console to make sure the cached versions of a website’s pages look the same as the live versions. Below you can see an example of this using a page on the Stone Temple website.

     Google Search Console fetch and render
  4. Test searches of the website’s branded terms to make sure the site is ranking for them. If not, it could indicate a penalty. Check the Google Search Console/Bing Webmaster Tools accounts to see if there are any identifiable penalties.

Learn how to do a basic site health check as part of an #SEO audit.

3. Keyword Health Checks

You’ll want to perform an analysis of the keywords you’re targeting on the site. This can be accomplished by many of the various SEO tools available. One thing to look for in general is if more than one page is targeting or showing up in the search results for the same keyword (aka “keyword cannibalization”).

You can also use Search Console to see what keywords are driving traffic to the site. If you see critical keywords that used to receive traffic are no longer working (the rankings dropped) that could be a sign of a problem.

On the positive side of the ledger, look for “striking distance” keywords, those that rank in positions from five to 20 or so. These might be keywords where some level of optimization could move them up in the rankings. If you can move from position five to three, or 15 to eight, on a major keyword, that could result in valuable extra traffic and provide reasonably high ROI for the effort involved.

For great #SEO opportunities, look for striking distance keywords.

4. Content Review

Here, we’re looking for a couple things:

  1. Content depth, quality and optimization: Do the pages have enough quality information to satisfy a searcher? You want to make sure the number of pages with little or “thin” content is small compared to those with substantial content. There are many ways to generate thin content. One example is a site that has image galleries with separate URLs for each image. Another is a site with city pages related to its business in hundreds, or thousands, of locations where it doesn’t do business, and where there is really no local aspect to the products or services offered on the site. Google has no interest in indexing all those versions, so you shouldn’t be asking it to do so! This is often one of the most underappreciated aspects of SEO. At Stone Temple, we’ve taken existing content on pages and rewritten it, and seen substantial traffic lifts. In more than one case, we’ve done this on more than 100 pages of a site and seen traffic gains of more than 150 percent!
  2. Duplicate content: A lot of websites have duplicate content without even realizing it. One of the first things to check is that the “www” version of the site and the “non-www” version do not exist at the same time (do they both resolve?). This can also happen with “http” and “https” versions of a site. Pick one version and 301 redirect the other to it. You can also set the preferred domain in Google Search Console (but still do the redirects even if you do this).
  3. Ad Density: Review the pages of your site to assess if you’re overdoing it with your advertising efforts. Google doesn’t like sites that have too many ads above the fold. A best practice to keep in mind is that the user should be able to get a substantial amount of the content they were looking for above the fold.

A thorough content review is an essential part of any #SEO audit.

5. URL Names

Website URLs should be “clean,” short and descriptive of the main idea of the page and indicate where a person is at in the website. So, make sure this is part of the SEO audit. Ensuring URLs are constructed well is helpful for both website users and search engines to orient themselves.

For example: http://www.site.com/outerwear/mens/hats

URLs should be clean, short, and descriptive of the page’s main idea.

It’s a good idea to include the main keyword for the web page in the URL, but never try to keyword-stuff (for example, http://www.site.com/outerwear/mens/hat-hats-hats-for-men).

Another consideration is URLs that have tracking parameters on them. Please don’t ever do this on a website! There are many ways to implement tracking on a site, and using parameters in the URLs is the worst way to do it.

If a website is doing this today, you’ll want to go through a project to remove the tracking parameters from the URLs, and switch to some other method for tracking.

On the other hand, perhaps the URLs are only moderately suboptimal, such as this one:


In cases like this, I don’t think that changing the URLs is that urgent. I’d wait until you’re in the midst of another larger site project at the same time (like a redesign).

6. URL Redirects

It’s a common best practice to ensure that a web page that no longer needs to exist on a website be redirected to the next most relevant live web page using a 301 redirect. There are other redirect types that exist as well, so be sure to understand the various types and how they function before using any of them.

Be sure to redirect pages that no longer need to be indexed in search to more useful pages.

Google recommends that you use 301 redirects because they indicate a page has permanently moved from one location to another, while other redirects, such as a 302, signal that the relocation is only temporary. If you use the wrong type of redirect, Google may keep the wrong page in its index.

It used to be the case that much less than 100 percent of the PageRank transferred to the new page through a redirect. In 2016, however, Google came out with a statement that there would be no PageRank value lost using any of the 3XX redirects.
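As a sketch, a non-www to www 301 in an Apache .htaccess file might look like this (domain and rules are placeholders, not a prescription for every server setup):

```apache
# Hypothetical .htaccess: permanently send non-www traffic to the www host
RewriteEngine On
RewriteCond %{HTTP_HOST} ^site\.com$ [NC]
RewriteRule ^(.*)$ https://www.site.com/$1 [R=301,L]
```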

To help check redirects, you can use tools like Redirect Check or RedirectChecker.org.

Redirect check

7. Meta Tags Review

Each and every web page on a site should have unique title tags and meta description tags—the tags that make up the meta information that helps the search engines understand what the page is about.

Make sure every page on your site has unique title and description tags.

This gives the website the ability to suggest to the search engines what text to use as the description of its pages in the search results (versus search engines like Google generating an “autosnippet,” which may not be as optimal).

It may also help prevent some pages of the website from being filtered out of the search results if search engines use the meta information to help detect duplicate content.

You’ll also want to take this opportunity to check for a robots meta tag on the pages of the site. If you find one, there could be trouble. For example, an unintentional “noindex” or “nofollow” value could adversely affect your SEO efforts.
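For reference, the tags in question look like this (values invented for illustration):

```html
<head>
  <title>Men's Winter Hats | Site.com</title>
  <meta name="description" content="Shop warm men's winter hats, from beanies to trappers.">
  <!-- If present, check this tag carefully - an unintended noindex can hide the page -->
  <meta name="robots" content="index, follow">
</head>
```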

8. Sitemaps and robots.txt Verification

It’s important to check the XML Sitemap and robots.txt files to make sure they are in good order. Is the XML Sitemap up to date? Is the robots.txt file blocking the crawling of sections of a site that you don’t want it to? You can use a feature in the Google Search Console to test the robots.txt file. You can also test and add a Sitemap file there as well.
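A typical robots.txt that blocks crawling of a private section while pointing to the Sitemap might look like this (paths and domain are invented):

```
User-agent: *
Disallow: /checkout/
Disallow: /search/

Sitemap: https://www.site.com/sitemap.xml
```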

9. Image Alt Attributes

Alt attributes for the images on a website help describe what the image is about. This is helpful for two reasons:

  1. Search engines cannot “see” image files the way a human would, so they need extra data to understand the content of the image.
  2. Web users with disabilities, like those who are blind, often use screen-reading software that will help describe the elements on a web page, images being one of them, and these programs make use of the alt attributes.

It doesn’t hurt to use keyword-rich descriptions in the attributes and file names when it’s relevant to the actual image, but you should never keyword-stuff.
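For example (file name and alt text invented): descriptive and relevant, not stuffed.

```html
<!-- Descriptive, relevant alt text - not a list of stuffed keywords -->
<img src="snow-shovel-ergonomic.jpg" alt="Ergonomic snow shovel with steel blade">
```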

10. Mobile Friendliness

The number of people searching and purchasing on their mobile devices is growing each year. At Stone Temple, we have clients who get more than 70 percent of their traffic from mobile devices. Google has seen this coming for a long time, and has been pushing for websites to become mobile friendly for years.

Because the mobile device is such a key player in search today, at the time of writing, Google has declared it will have a mobile-first index. What that means is that it will rank search results based on the mobile version of a website first, even for desktop users.

One key aspect of a mobile-first strategy from Google is that its primary crawl will be of the mobile version of a website, and that means Google will be using the mobile crawl to discover pages on a site.

Most companies have built their desktop site to aid Google in discovering content, and their mobile site purely from a UX perspective. As a result, the crawl of a mobile site might be quite poor from a content discovery perspective.

Make sure to include a crawl of the mobile site as a key part of any audit of a site. Then compare the mobile crawl results with the crawl of the desktop site.

It is now essential for an SEO audit to include a mobile crawl of your site.

If a website doesn’t have a mobile version, Google has said it will still crawl and rank the desktop version; however, not having mobile-friendly content means a website may not rank as well in the search results.

While there are a few different technical approaches to creating a mobile-friendly website, Google has recommended that websites use responsive design. There’s plenty of documentation on how to do that coming directly from Google, as well as tools that can help gauge a website’s mobile experience, like this one.

It’s worth mentioning Google’s accelerated mobile pages (AMP) here as well. This Google-led effort gives website publishers the ability to serve their web content to users even faster.

While Google has said that, at the time of writing, AMP pages won’t receive a boost in rankings, page speed is a signal. The complexity of the technical implementation of AMP pages is one of the reasons some may choose not to explore it.

Another way to create mobile experiences is via progressive web apps, which is an up-and-coming way to provide mobile app-like experiences on the web via the browser (without having to download an app).

The main benefit is the ability to access specific parts of a website in a way similar to what traditional apps can.

11. Site Speed

Site speed is one of the signals in Google’s ranking algorithm. Slow load times can cause the crawling and indexing of a site to be slower, and can increase bounce rates on a website.

Historically, this has only been a ranking factor when site speeds were very slow, but Google has been making noise that it will become more important over time. Google’s John Mueller has also indicated that a site that is too slow, even if nominally mobile-friendly, may now be deemed non-mobile-friendly. However, mobile page speed is not currently treated by Google as a ranking factor.

Site speed will become increasingly important as a search factor. Are you ready?

In fact, site speed has become such an important element of the overall user experience, especially in mobile, that Google has said it wants above-the-fold content for mobile users to render in one second or less.

To help people get more visibility into site speed, Google offers tools such as the PageSpeed Insights tool and the site speed reports found in Google Analytics.

12. Links

Here, we’re looking at links in a couple different ways: internal links (those on the website itself) and external links (other sites linking to the website).

Internal Links
First, look for pages that have excessive links. You may want to minimize those. Second, make sure the web pages use anchor text intelligently without abusing it or it could look spammy to search engines. For example, if you have a link to the home page in the global navigation, call it “Home” instead of picking your juiciest keyword.

Internal links are what define the overall hierarchy of a site. The site might, for example, look like this:

perfectly structured site

The site above obviously has a well-defined structure, and that’s good. But in practice, sites rarely look like this, and some level of deviation from this is perfectly fine.

A home page may link directly to some of the company’s top products, as shown in Level 4 of the image, and that’s fine. However, it’s a problem if the site has a highly convoluted structure that has many pages that can only be reached after a large number of clicks if you try to navigate to them from the home page, or if each page is linking to too many other pages.

Look for these types of issues and try to resolve them to create something with a cleaner hierarchy.
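Click depth from the home page is easy to measure once you have crawl data. As a rough sketch (the site structure and URLs below are invented for illustration), a breadth-first search over the internal link graph gives the minimum number of clicks needed to reach each page:

```python
from collections import deque

def click_depths(links, home="/"):
    """Compute the minimum number of clicks needed to reach each page
    from the home page, given a dict of page -> list of linked pages."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical site: home links to two categories,
# each category links to its product pages.
site = {
    "/": ["/shoes", "/hats"],
    "/shoes": ["/shoes/red", "/shoes/blue"],
    "/hats": ["/hats/straw"],
}
print(click_depths(site))
```

Pages with an unusually large depth (or missing from the result entirely, meaning they are unreachable from the home page) are the ones worth restructuring.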

External Links
Also known as inbound links or backlinks, these are links from other sites pointing to yours. Perform an analysis to ensure there aren’t any problems there, like a history of purchased links, irrelevant links, or links that look spammy.

You can use tools like Open Site Explorer, Majestic SEO, Ahrefs Site Explorer, SEMRush, and the Google Search Console/Bing Webmaster Tools accounts to collect data about links.

Personally, I like to use all of these sources, collect all of their output data, dedupe it and build one master list. None of the tools provides a complete list, so using them all will get you the best possible picture.
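The merge-and-dedupe step can be sketched in a few lines. This example assumes each tool’s export has been reduced to a plain list of linking URLs (the URLs and normalisation rules here are illustrative, not a complete canonicalisation):

```python
def merge_backlinks(*exports):
    """Merge backlink URL lists from several tools into one
    de-duplicated master list, normalising trivial variations."""
    seen = set()
    master = []
    for export in exports:
        for url in export:
            # Normalise case and a trailing slash so near-duplicates collapse.
            key = url.strip().lower().rstrip("/")
            if key not in seen:
                seen.add(key)
                master.append(url.strip())
    return master

# Hypothetical exports from two different link tools.
ose = ["https://example.com/page-a", "https://example.com/page-b/"]
majestic = ["https://example.com/page-b", "https://example.com/page-c"]
print(merge_backlinks(ose, majestic))
```

The same pattern extends to any number of exports; real-world use would add stronger URL canonicalisation (scheme, www, query strings) before comparing.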

Look for patterns in the anchor text, like if too many of the links have a critical keyword for the site in them. Unless the critical keyword happens to also be the name of the company, this is a sure sign of trouble.
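A quick way to spot over-optimised anchor text is to compute what share of anchors contain the target keyword. A minimal sketch, with invented anchor data:

```python
def keyword_anchor_share(anchors, keyword):
    """Return the fraction of anchor texts containing the keyword
    (case-insensitive substring match)."""
    if not anchors:
        return 0.0
    hits = sum(1 for a in anchors if keyword.lower() in a.lower())
    return hits / len(anchors)

# Hypothetical anchor texts pulled from a backlink export.
anchors = ["best widgets", "click here", "Best Widgets online",
           "example.com", "widgets"]
share = keyword_anchor_share(anchors, "widgets")
print(f"{share:.0%} of anchors contain the keyword")
```

A natural backlink profile is dominated by branded and generic anchors ("example.com", "click here"), so a high keyword share from this kind of check is the pattern to investigate.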

Also check whether there are links to pages other than the home page; a backlink profile made up almost entirely of home page links is another sure sign of trouble. Lastly, check how the backlink profile for the site compares to the backlink profiles of its major competitors.

Make sure that there are enough external links to the site, and that there are enough high-quality links in the mix.

13. Subdomains

Historically, it’s been believed that subdomains do not benefit from the primary domain’s full trust and link authority. This was largely due to the fact that a subdomain could be under the control of a different party, and therefore in the search engine’s eyes, it needed to be separately evaluated.

For an example of a domain that allows third parties to operate subdomains of its site, consider Blogger, which allows people to set up their own blogs and operate them as subdomains of Blogspot.com.

For the most part, this is not really true today, and search engines are extremely good at recognizing whether or not the subdomain really is a part of the main domain, or if it’s independently operated.

I still recommend using a subfolder over a subdomain as the default approach to adding new categories of content to a site. However, if you already have content on a subdomain, I would not move it to a subfolder unless you have clear evidence of a problem: there is a cost to site moves, and the upside of making the move is usually not enough to pay that cost.

For purposes of an audit, you need to make sure you include subdomains. As part of this, make sure your crawl covers them, and check analytics data for any clear evidence of a problem, such as very little traffic or recent material traffic drops.

For more on subdomains and their effect on SEO, see Everything You Need to Know About Subfolders, Subdomains, and Microsites for SEO.

14. Geolocation

For websites that aim to rank locally, for example, a chiropractor that’s established in San Francisco and wants to be found for “San Francisco chiropractor,” you’ll want to consider things like making sure the business address is on every page of the site, and claiming and ensuring the validity of the Google Places listings.

Beyond local businesses, websites that target specific countries or multiple countries with multiple languages have a whole host of best practice considerations to contend with.

These include things like understanding how to use hreflang tags properly, and attracting attention (such as links) from within each country where products and services are sold by the business.

15. Code Quality

A website with clean code that allows the search engines to crawl it with ease enhances the experience for the crawlers. W3C validation is the “gold standard” for performing a checkup on the website’s code, but is not really required from an SEO perspective (if search engines punished sites for poor coding practices, there might not be much left to show in the search results). Nonetheless, clean coding improves maintainability of a site, and reduces the chances of errors (including SEO errors) creeping into the site.


An SEO audit can occur at any stage of the lifecycle of a website, and can even be performed on a periodic basis, like quarterly or annually, to ensure everything is on the up and up.

While there are different approaches to performing an SEO audit, the steps listed in this article serve as a solid foundation to getting to know the site better and how it can improve, so your SEO efforts get the most ROI.

This article adapted from the book The Art of SEO: Mastering Search Engine Optimization (3rd Edition), Eric Enge lead co-author.


ASP.NET authorisation

Resource: https://weblogs.asp.net/gurusarkar/setting-authorization-rules-for-a-particular-page-or-folder-in-web-config

I have seen so many people asking again and again how to allow access to a particular page for a specific user or role. So I thought it would be good to put this in one place. I will discuss how to configure web.config depending on the scenario.

We will start with a web.config without any authorization and modify it on a case-by-case basis.

No Authorization

We will start with the root web.config without any authorization.

<configuration>
  <system.web>
    <authentication mode="Forms">
    </authentication>
  </system.web>
</configuration>

Deny Anonymous user to access entire website

This is the case when you want everybody to log in before they can start browsing around your website, i.e. the first thing they will see is a login page.
<system.web>
  <authentication mode="Forms"/>
  <authorization>
    <deny users="?"/> <!-- deny anonymous users -->
  </authorization>
</system.web>
The above setup is good when users don’t have to register themselves; instead, their user accounts are created by an administrator.

Allow access to everyone to a particular page

Sometimes you want to allow public access to your registration page and restrict access to the rest of the site to logged-in / authenticated users, i.e. not allow anonymous access. Say your registration page is called register.aspx in your site’s root folder. In the web.config of your website’s root folder you need the following setup.


<system.web>
  <authentication mode="Forms"/>
  <authorization>
    <deny users="?"/> <!-- restrict anonymous user access -->
  </authorization>
</system.web>

<location path="register.aspx"> <!-- path is the path to your register.aspx page, e.g. it could be ~/publicpages/register.aspx -->
  <system.web>
    <authorization>
      <allow users="*"/> <!-- allow everyone access to register.aspx -->
    </authorization>
  </system.web>
</location>


So far we have seen how to allow either all users or authenticated users only. But there could be cases where we want to allow a particular user access to certain pages but deny everyone else (authenticated as well as anonymous).

To allow access to particular user only and deny everyone else

Say you want to give access to the user “John” for a particular page, e.g. userpersonal.aspx, and deny all others. The location tag should look like below:

<location path="userpersonal.aspx">
  <system.web>
    <authorization>
      <allow users="John"/> <!-- allow John; multiple users can be comma-separated, e.g. users="John,Mary" -->
      <deny users="*"/>     <!-- deny everyone else -->
    </authorization>
  </system.web>
</location>

Allow only users in particular Role
I will not show how to set up roles here; I assume you have role management set up for your users. We will now see what needs to be done in web.config to configure authorization for a particular role. E.g. you have two roles, Customer and Admin, and two folders, CustomerFolder and AdminFolder. Users in the Admin role can access both folders. Users in the Customers role can access only CustomerFolder and not AdminFolder. You will have to add location tags for each folder path as shown below:
<location path="AdminFolder">
  <system.web>
    <authorization>
      <allow roles="Admin"/> <!-- allow users in the Admin role -->
      <deny users="*"/>      <!-- deny everyone else -->
    </authorization>
  </system.web>
</location>

<location path="CustomerFolder">
  <system.web>
    <authorization>
      <allow roles="Admin,Customers"/> <!-- allow users in the Admin and Customers roles -->
      <deny users="*"/>                <!-- deny everyone else -->
    </authorization>
  </system.web>
</location>


Alternate way – using individual web.config for each Folder
As an alternative to the above-mentioned method of using the location tag, you can add a web.config to each folder and configure authorization accordingly — almost identical to what was shown above, but without the location tag. Taking the same example as above, add a web.config to both folders: AdminFolder and CustomerFolder.

Web.config in AdminFolder should look like:


<configuration>
  <system.web>
    <authorization>
      <allow roles="Admin"/> <!-- allow users in the Admin role -->
      <deny users="*"/>      <!-- deny everyone else -->
    </authorization>
  </system.web>
</configuration>


Web.config in CustomerFolder should look like: 

<configuration>
  <system.web>
    <authorization>
      <allow roles="Admin,Customers"/> <!-- allow users in the Admin and Customers roles -->
      <deny users="*"/>                <!-- deny everyone else -->
    </authorization>
  </system.web>
</configuration>

Images and CSS files

Say you have all your images and CSS in a separate folder called images, and you are denying anonymous access to your website. In that case you might find that your login page cannot show its images (if any) or apply its CSS (if any) to the page controls.

In that case you can add a web.config to the images and css folder and allow access to everyone to that folder. So your web.config in images folder should look as below:

<configuration>
  <system.web>
    <authorization>
      <allow users="*"/> <!-- allow everyone -->
    </authorization>
  </system.web>
</configuration>


Common Mistakes

I have seen people complaining that they have set up their roles correctly and made the entry in their web.config, but their authorization still doesn’t work: even though they have allowed access to a role, users in that role cannot access the particular page/folder. The common reason for that is placing <deny> before <allow>.

Say the web.config from AdminFolder as we have seen before is something like this:
<!-- This web.config will NOT allow access to users even if they are in the Admin role -->
<configuration>
  <system.web>
    <authorization>
      <deny users="*"/>      <!-- deny everyone: matches first -->
      <allow roles="Admin"/> <!-- never reached -->
    </authorization>
  </system.web>
</configuration>
Since authorization rules are evaluated from top to bottom, checking stops at the first match. Here <deny users="*"/> comes first, so it matches every request and access is denied even for users in the Admin role; the <allow> rule is never checked.

So PUT all allows BEFORE ANY deny.

NOTE: deny works the same way as allow. You can deny particular roles or users as per your requirement.
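For example, to keep one specific role out of a folder while letting everyone else in, the same top-to-bottom evaluation applies: the more specific <deny> must come before the catch-all <allow>. (The "Vendors" role and "ReportsFolder" path here are hypothetical, just to illustrate the pattern.)

```xml
<location path="ReportsFolder">
  <system.web>
    <authorization>
      <deny roles="Vendors"/> <!-- deny users in the Vendors role: matches first -->
      <allow users="*"/>      <!-- allow everyone else -->
    </authorization>
  </system.web>
</location>
```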

Update: Issue with IIS 7

With IIS 7 you will have to give access to IUSR Anonymous user account to your folder that contains your css or images files. Check resource below.

I hope this answers some of the questions regarding how to authorize pages / folders (directories).

Comments welcome.