Usman Farooq is an emerging SEO/SEM expert. With his diverse skills and search engine knowledge, his aim is to reach for the sky and become known around the globe for his search engine optimization and search engine marketing techniques, delivering customer satisfaction at its best.
Thursday, June 24, 2010
Bing It? “Bring It,” Says Google
SearchCap: The Day In Search, June 23, 2010
Viacom Loses Google/YouTube Lawsuit
Four Search Agencies Merge To Form BlueGlass Interactive
Twitter’s First Head Of State Visit: Russian President Dmitry Medvedev
Two Advanced Tactics For PPC Copywriting
Tuesday, June 22, 2010
SearchCap: The Day In Search, June 22, 2010
From Search Engine Land:
Google, Twitter Argue Against Throttling Speed Of News
Google and Twitter have teamed up to support a web site that’s facing legal trouble for reporting news too quickly.
Reuters reports that Google and Twitter filed [...]
Bing Entertainment Unwrapped: Music, Movies, Games & TV
The official Bing blog has all the details about the new features in all aspects of entertainment: Music, Movies, TV listings and gaming information.
At [...]
Whiteboard Friday - What's Working for You? with Richard Baxter
Posted by Scott Willoughby
The avalanche-like flow of special guest Whiteboard Fridays continues this week with another installment featuring our beloved London SEO expert, Richard Baxter (anchor text, y'all). Last week Richard helped us all learn how to get our fresh content indexed lickety-split, and this week he's back to help us learn how to identify which areas of our sites are working hardest for us.
Whether you have multiple types of content on your site (maybe a blog, tools, articles, etc.), or you have limited content types across different topics (blog posts about cats, kittens, evil cats, ninja kittens, evil ninja kitten cats, etc.), wouldn't it be nice to know which content types or topics bring you the most and best traffic? Never fear, Richard's here to explain his handy-dandy system to do just that! By the end of this video you'll know exactly which stats to pull from your analytics to create a so-shiny-it's-practically-chromed spreadsheet that will let you peer deep into the inky black heart of your site and know the stars, the slackers, and the shiftless hobos among your content.
Wow! It's like the future is now! And, since thinking of the future always makes me think of 'Flash', and thinking of 'Flash' reminds me that those of you without Adobe Flash can't watch the video, I'll try to summarize Richard's bard-like musings on content segmentation and performance analysis.
In order to track and analyze the performance of your individual content, you'll want to segment out your analytics data by content type. This is really, really easy to do if you have good, clean site structure (which you have, right? RIGHT?!). You can just pull Richard's data points (below) for the different sections or subfolders of your site. If you were lazy and thought the best way to organize your site was to throw all of the pages into a virtual bucket, dump them out, name them by throwing your keyboard at a stump, and call it a day, you'll have to get a little more involved with how you filter your segments. No matter what though, you might consider segments like all blog posts (perhaps a 'CONTAINS /blog' filter), all tools, all content written by Belverd Needles, III (/authors/belverd), etc.
Once you have your segment filters in place, you just need to pull the data that Richard suggests and you'll be able to see exactly how Belverd's content compares to that of his bloggitty arch-nemesis, Marmaduke Huffsworth, Esq. (/authors/marmaduke). What data, you say? This data (there's a quick worked sketch of the calculations right after the list):
1. Number of Pages per Segment: Richard advocates crawling your site using something like Link Sleuth to get this number; you'll use it for all sorts of fun calculations. Yes, calculations can be fun. If you don't believe me, just ask these racially diverse, embroidered youths.
2. Number of Keywords Sending Traffic: You can pull this from your analytics. Don't worry so much about the words themselves here; you just want to know how many different keyword terms are delivering one or more visits to each segment.
3. Number of Pages Getting Entries from Search Engines: How many pages within the segment received one or more visits from a search engine (pick an engine, any engine, or all of them, whatever matters to you...so Google, basically).
4. Total Visits from Search Engines: Like it says on the tin, this is just the total number of visits to the segment from search traffic.
5. Percentage of Total Visits that Performed a Conversion Action: This will require that you have some conversion actions set up in your analytics, but it's a key data point if you want to figure out your strongest content.
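If spreadsheets aren't your thing, here's a rough idea of the kind of calculations you'd run on those five numbers once you've pulled them. This isn't from Richard's video; it's just a minimal sketch with made-up numbers for two hypothetical segments, to show how the data points combine into per-segment ratios.

<?php
// Hypothetical per-segment numbers (not real data) pulled from a crawl + analytics,
// matching the five data points listed above.
$segments = array(
    '/blog'  => array('pages' => 120, 'keywords' => 900, 'pages_with_entries' => 80,
                      'search_visits' => 4200, 'conversion_rate' => 0.020),
    '/tools' => array('pages' => 15,  'keywords' => 450, 'pages_with_entries' => 14,
                      'search_visits' => 3100, 'conversion_rate' => 0.055),
);

foreach ($segments as $name => $s) {
    $visits_per_page   = $s['search_visits'] / $s['pages'];            // how hard each page works
    $keywords_per_page = $s['keywords'] / $s['pages'];                 // long-tail reach per page
    $pct_pages_earning = 100 * $s['pages_with_entries'] / $s['pages']; // share of pages getting any search entries
    $conversions       = $s['search_visits'] * $s['conversion_rate'];  // estimated converting visits
    printf("%s: %.1f visits/page, %.1f keywords/page, %.0f%% of pages earning entries, ~%.0f conversions\n",
           $name, $visits_per_page, $keywords_per_page, $pct_pages_earning, $conversions);
}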
So what can all of this stuff tell you? LOTS! By tracking these numbers, you'll be able to quickly identify which content is working hardest for you. You'll be able to know whether Marmaduke or Belverd is better at drawing high-converting traffic. You'll know which subjects and content types are most deserving of your precious time and the investment of your hard-bilked pennies. You'll know who put the bop in the bop shoo bop, who moved your cheese, and why birds suddenly appear every time I'm near (it's because my pockets are full of birdseed). You'll be 12.7-29.4% awesomier than you were before, and you'll smell delightful ALL THE TIME!
Now aren't you glad Richard stopped by and shared his magic secrets with you? Thanks, Richard!
p.s. Richard has posted more about getting things indexed quickly w/ PubSubHubBub and more on his blog - well worth a read.
Amazon Web Services: Clouded by Duplicate Content
Posted by Stephen Tallamy
This post was originally in YOUmoz, and was promoted to the main blog because it provides great value and interest to our community. The author's views are entirely his or her own and may not reflect the views of SEOmoz, Inc.
At the end of last year the website I work on, LocateTV, moved into the cloud with Amazon Web Services (AWS) to take advantage of increased flexibility and reduced running costs. A while after we switched I found that Googlebot was crawling the site almost twice as much as it used to. Looking into it some more, I found that Google had been crawling the site from a subdomain of amazonaws.com.
The problem is, when you start up a server on AWS it automatically gets a public DNS entry which looks a bit like ec2-123-456-789-012.compute-1.amazonaws.com. This means that the server will be available through this domain as well as the main domain that you will have registered to the same IP address. For us, this problem doubled itself, as we have two web servers for our main domain, and hence the whole of the site was being crawled through two different amazonaws.com subdomains as well as www.locatetv.com.
Now there were no external links to these AWS subdomains but, being a domain registrar, Google was notified of the new DNS entries and went ahead and indexed loads of pages. All this was creating extra load on our servers and a huge duplicate content problem (which I cleaned up, after quite a bit of trouble - more below).
A pretty big mess.
I thought I'd do some analysis into how many other sites were being affected by this problem. A quick search on Google for site:compute-1.amazonaws.com and site:compute.amazonaws.com reveals almost half a million web pages indexed (the stats from this command are often dodgy, but it gives some sense of the scale of the issue).
My guess is that most of these pages are duplicate content with the site owners having separate DNS entries for their site. Certainly this is the case for the first few sites I checked:
- http://ec2-67-202-8-9.compute-1.amazonaws.com is the same as http://www.broadjam.com
- http://ec2-174-129-207-154.compute-1.amazonaws.com is the same as http://www.elephantdrive.com
- http://ec2-174-129-253-143.compute-1.amazonaws.com is the same as http://boxofficemojo.com
- http://ec2-174-129-197-200.compute-1.amazonaws.com is the same as http://www.promotofan.com
- http://ec2-184-73-226-122.compute-1.amazonaws.com is the same as http://www.adbase.com
For Box Office Mojo, Google is reporting 76,500 pages indexed for the amazonaws.com address. That's a lot of duplicate content in the index. A quick search for something specific like "Fastest Movies to Hit $500 Million at the Box Office" shows duplicates from both domains (plus a secure subdomain and the IP address of one of their servers - oops!).
Whilst I imagine Google would be doing a reasonable job of filtering out the duplicates when it comes to most keywords, it's still pretty bad to have all this duplicate content in the index and all that wasted crawl time.
This is pretty dumb for Google (and other search engines) to be doing. It's pretty easy to work out that both the real domain and the AWS subdomain resolve to the same IP address and that the pages are the same. They could be saving themselves a whole lot of time crawling URLs that only exist because of a duplicate DNS entry.
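Just to illustrate how cheap that check is (this is my sketch, not anything Google has published), comparing the DNS answers for the two hostnames takes a couple of lines of PHP; the hostnames below are made up.

<?php
// Hypothetical hostnames: the AWS public DNS name and the site's canonical domain.
$aws_host  = 'ec2-123-456-789-012.compute-1.amazonaws.com';
$real_host = 'www.mydomain.com';

// gethostbyname() returns the IPv4 address a name resolves to (or the name itself on failure).
$aws_ip  = gethostbyname($aws_host);
$real_ip = gethostbyname($real_host);

if ($aws_ip === $real_ip && $aws_ip !== $aws_host) {
    echo "Both hostnames resolve to $real_ip -- likely the same server and duplicate content.\n";
} else {
    echo "Different IPs ($aws_ip vs $real_ip) -- not obviously the same box.\n";
}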
Fixing the source of the problem.
As good SEOs we know that we should do whatever we can to make sure that there is only one domain name resolving to a site. There is, at the moment, no way to stop AWS from adding the public DNS entries, so one way to solve this is to make sure that any request reaching the web server via the AWS subdomain gets redirected to the main domain. Here is an example of how to do this using Apache mod_rewrite:
# 301 any request arriving on the AWS public hostname to the canonical domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^ec2-123-456-789-012\.compute-1\.amazonaws\.com$ [NC]
RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]
This can be put either in the httpd.conf file or the .htaccess file and basically says that if the requested host is ec2-123-456-789-012.compute-1.amazonaws.com then 301 redirect all URLs to the equivalent URL on www.mydomain.com.
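If for some reason you can't touch httpd.conf or .htaccess, the same canonical-host redirect can be done at the application level. This isn't from the original post; it's a minimal PHP sketch, assuming your pages run through a common include, with a made-up canonical domain.

<?php
// Put this near the top of a common include, before any output is sent.
$canonical_host = 'www.mydomain.com';  // hypothetical canonical domain

if (isset($_SERVER['HTTP_HOST']) &&
    strtolower($_SERVER['HTTP_HOST']) !== $canonical_host) {
    // 301 to the same path and query string on the canonical domain.
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://' . $canonical_host . $_SERVER['REQUEST_URI']);
    exit;
}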
This fix quickly stopped Googlebot from crawling our amazonaws.com subdomain addresses, which took considerable load off our servers, but by the time I'd spotted the problem there were thousands of pages indexed. As these pages were probably not doing any harm I thought I'd just let Google find all the 301 redirects and remove the pages from the index. So I waited, and waited, and waited. After a month the number of pages indexed (according to the site: command) was exactly the same. No pages had dropped out of the index.
Cleaning it up.
To help Google along I decided to submit a removal request using Webmaster Tools. I temporarily removed the 301 redirects to allow Google to see my site verification file (obviously it was being redirected to the verification file on my main domain) and then put the 301 redirect back in. I submitted a full site removal request, but it was rejected because the domain was not being blocked by robots.txt. Again, this is pretty dumb in my opinion, because the whole of the subdomain was being redirected to the correct domain.
As I was a bit annoyed that the removal request would not work the way I wanted it to, I thought I'd leave Google another month to see if it found the 301 redirects. After at least another month, no pages had dropped out of the index. This backs up my suspicion that Google does a pretty poor job of finding 301 redirects for stuff that isn't in the web's link graph. I have found this before, where I have changed URLs, updated all internal links to point at the new URLs and redirected the old URLs. Google doesn't seem to go back through its index and re-crawl pages that it hasn't found in its standard web crawl to see if they have been removed or redirected (or if it does, it does it very, very slowly).
Having had no luck with the 301 approach, I decided to switch to using a robots.txt file to block Google. The issue here is that, clearly, I didn't want to edit my main robots.txt to block bots, as that would stop crawling of my main domain. Instead, I created a file called robots-block.txt that contained the usual blocking instructions:
User-agent: *
Disallow: /
I then replaced the redirect entries in my .htaccess file with something like this:
RewriteCond %{HTTP_HOST} ^ec2-123-456-789-012\.compute-1\.amazonaws\.com$ [NC]
RewriteRule ^robots\.txt$ robots-block.txt [L]
This basically says that if the requested host is ec2-123-456-789-012.compute-1.amazonaws.com and the requested path is robots.txt, then serve the robots-block.txt file instead. This means I effectively have a different robots.txt file served from this subdomain. Having done this I went back to Webmaster Tools, submitted the site removal request, and this time it was accepted. "Hey presto", my duplicate content was gone! For good measure I replaced the robots.txt mod_rewrite with the original redirect commands to make sure any real users are redirected properly.
Reduce, reuse, recycle.
This was all a bit of a fiddle to sort out and I doubt many webmasters hosting on AWS will have even realised that this is an issue. This is not purely limited to AWS, as a number of other hosting providers also create alternative DNS entries. It is worth finding out what DNS entries are configured for the web server(s) serving a site (this isn't always that easy but you can use your access logs/analytics to get an idea) and then making sure that redirects are in place to the canonical domain. If you need to remove any indexed pages then hopefully you can do something similar to the solution I proposed above.
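One hedged way to get that idea from your access logs: if your log format records the Host header (Apache's vhost_combined format puts it in the first field; the stock combined format doesn't), a quick script can list every hostname your server has been answering for. The log path and format here are assumptions, not a universal recipe.

<?php
// Count distinct Host values in an access log whose lines start with the hostname
// (as in Apache's vhost_combined format: "host:port ip - - [date] ...").
$log = '/var/log/apache2/access.log';  // hypothetical path -- adjust for your server

$lines = @file($log);
if ($lines === false) {
    die("Could not read $log\n");
}

$hosts = array();
foreach ($lines as $line) {
    // Grab the first field and strip any :port suffix.
    $first = strtok($line, ' ');
    $host  = preg_replace('/:\d+$/', '', $first);
    $hosts[$host] = isset($hosts[$host]) ? $hosts[$host] + 1 : 1;
}

arsort($hosts);
foreach ($hosts as $host => $hits) {
    echo "$host\t$hits\n";
}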
There are some things that Google could do to help solve this problem:
- Be a bit more intelligent in detecting duplicate domain entries for the same IP address.
- Put some alerts into Webmaster Tools so webmasters know there is a potential issue.
- Get better at re-crawling indexed pages not found in the standard crawl, to detect redirects.
- Add support for site removal when a site-wide redirect is in place.
In the meantime, hopefully I've given some actionable advice if this is a problem for you.
Sunday, June 20, 2010
URL Rewrite Smack-Down: .htaccess vs. 404 Handler
Posted by MichaelC
First, a quick refresher: URL prettying and 301 redirection can both be done in .htaccess files, or in your 404 handler. If you're not completely up to speed on how URL rewrites and 301s work in general, this post will definitely help. And if you didn't read last week's post on RewriteRule's split personality, it's probably helpful background material for understanding today's post.
"URL prettying" is the process of showing readable, keyword-rich URLs to the end user (and Googlebot) while actually using uglier, often parameterized URLs behind the scenes to generate the content for the page. Here, you do NOT do a 301 redirection. (Unclear on redirection, 301s vs. 302s, etc.? There's help waiting for you here in the SEOmoz Knowledge Center.) |
301s are done when you really have moved the page, and you really do want Googlebot to know where the new page is. You're admitting to Googlebot that it no longer exists in the old location. You're also asking Googlebot to give the new page credit for all the link juice the old page had earned in the past.
If you're trigger-happy, you might leap to the conclusion that RewriteRule is the weapon of choice for both URL prettying and 301 redirects. Certainly you CAN use RewriteRule for these tasks, and certainly the regex syntax is a powerful way to accomplish some pretty complex URL transformations. And really, if you're going to use RewriteRule, you should probably be using it in your httpd.conf file instead.
The Apache docs have a great summary of when not to use .htaccess.
Fear Not the 404 Handler
First, all y'all who tremble at the thought of creating your very own custom 404 handler, take a Valium. It's not that challenging. If you've gotten RewriteRule working and lived to tell the tale, you're not going to have any difficulty making a custom 404 error handler. It's just a web page that displays some sort of "not found" message, but it gives you an opportunity to have a look at the page that was requested, and if you can "save it", you redirect the user to the page they're looking for with just a line or two of code.
If not, the 404 HTTP status gets returned, along with however you'd like the page to look when you tell them you couldn't find what they were looking for.
By the way, having your own 404 handler gives you the opportunity to entertain your user, instead of just making them feel sorry for themselves. Check out this post from Smashing Magazine on creative 404 pages.
Having a good sense of humor could inspire love & loyalty from a customer who otherwise might just be miffed at the 404.
Here's an example of a 404 handler in ASP. Important note: don't use Response.Redirect -- it does a 302, not a 301!
For PHP, you need to add a line to your .htaccess pointing to wherever you've put your 404 handler:
- ErrorDocument 404 /my-fabulous-404-handler.php
Then, in that PHP file, you can get the URL that wasn't found via:
- $request = $_SERVER['REDIRECT_URL'];
Then, use any PHP logic you'd like to analyze the URL and figure out where to send the user.
If you can successfully redirect it, set:
- header("HTTP/1.1 301 Moved Permanently");
- header ("Location: http://www.acmewidgets.com/purple-gadgets.php");
And here's where it gets a bit hairy in PHP. There's no real way to transfer control to another webpage behind the scenes--without telling the browser or Googlebot via 301 that you're handing it off to the other page. But you can call require() on the fly to pull in the code from the target page. Just make sure to set the HTTP code to 200 first:
- header('HTTP/1.1 200 OK');
And you've got to be careful throughout your site to use include_once() instead of include() to make sure you don't pull a common file in twice. Another option is to use curl to grab the content of the target page as if it were on a remote server, then regurgitate the HTML back in-stream by echoing what you get back. A bit hazardous if you're trying to drop cookies, though...
And, if you really need to send a 404:
- header('HTTP/1.0 404 Not Found');
Very Important: be careful to make sure you're returning the right HTTP code from your 404 handler. If you've found a good content page you'd like to show, return a 200. If you found a good match, and want Googlebot to know about that pagename instead of what was requested, do a 301. If you really don't have a good match, be sure you send a 404. And, be sure to test the actual response codes received--I'm a huge fan of the HttpFox Firefox plug-in.
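Pulling those pieces together, here's roughly what a bare-bones PHP 404 handler might look like. This is a sketch, not the author's code: the lookup function, the URL patterns, and the target filenames are hypothetical, and you'd swap the string matching for whatever logic fits your URL scheme.

<?php
// my-fabulous-404-handler.php -- wired up via: ErrorDocument 404 /my-fabulous-404-handler.php
$request = $_SERVER['REDIRECT_URL'];   // the URL that wasn't found

// Hypothetical helper: map an old/pretty URL to either a redirect target,
// an internal script to run, or nothing at all.
function resolve_url($url) {
    if (preg_match('#^/old-widgets/(.*)$#', $url, $m)) {
        // Page really moved: tell browsers and Googlebot about the new home.
        return array('type' => 'redirect', 'to' => '/widgets/' . $m[1]);
    }
    if (preg_match('#^/hotels/#', $url)) {
        // Pretty URL: serve content from the parameterized page behind the scenes.
        return array('type' => 'serve', 'script' => 'hotel.php');
    }
    return null;
}

$match = resolve_url($request);

if ($match && $match['type'] === 'redirect') {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.acmewidgets.com/' . ltrim($match['to'], '/'));
    exit;
}

if ($match && $match['type'] === 'serve') {
    header('HTTP/1.1 200 OK');   // we found good content, so say so
    require $match['script'];    // pull in the real page's code
    exit;
}

// No good match: send a real 404 with a friendly page.
header('HTTP/1.0 404 Not Found');
echo '<h1>Sorry, we could not find that page.</h1>';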
Ease of Debugging
This is where the 404 handler really wins my affection. Because it's just another web page, you can output partial results of your string manipulation to see what's going on. Don't actually code the redirection until you're sure you've got everything else working. Instead, just spit out the URL that came in, the URL you're trying to fabricate and redirect to, and any intermediate strings that help you figure it all out. With RewriteRule, debugging pretty much consists of coding your regex expression, putting in the flags, then seeing if it worked. Is the URL coming in in mixed case? The slashes...forward? Reverse? Did I need to escape that character...or is it not That Special?
You're flying blind. It works, or it doesn't work.
If you're struggling with RewriteRule regular expressions, Rubular has a nice regex editor/tester.
Programming Flexibility
With RewriteRule, you've got to get all the work done in the single line of regex. And while regex is elegant, powerful, and should be worshipped by all, sometimes you'll want to do more complex URL rewriting logic than just clever substitution. In your 404 handler, you can call functions to do things like convert numeric parameters in your source URL to words and vice versa.
Access to Your Database
If you're working with a big, database-driven site, you may want to look up elements in your database to convert from parameters to words.
And since the 404 handler is just another webpage, you can do anything with your database that you'd do in any other webpage.
For example, I had a travel website where destinations, islands, and hotels all were identified in the database by numeric IDs. The raw page that displayed content for a hotel also needed to show the country and island that the hotel was on.
The raw URL for a specific hotel page might have been something like:
/hotel.asp?dest=41&island=3&hotel=572
Whereas the "pretty URL" for this hotel might have been something like:
/hotels/Hawaii/Maui/Grand-Wailea/
When the "pretty URL" above was requested by the client, my 404 handler would break the URL down into sections:
- looking up the 2nd section in the destinations table (Hawaii = 41)
- looking up the 3rd section in the island table (Maui = 3)
- looking up the 4th section in the hotel table (Grand Wailea = 572)
Then, I'd call the ASP function Server.Transfer to transfer execution to /hotel.asp?dest=41&island=3&hotel=572 to generate the content.
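For the PHP crowd, the same decode-and-dispatch idea might look something like the sketch below. The table and column names are hypothetical and the PDO connection details are placeholders; the point is just looking up each URL section against the database before handing off to the parameterized page (PHP has no Server.Transfer, so a require() stands in for it).

<?php
// Inside the 404 handler: /hotels/Hawaii/Maui/Grand-Wailea/ -> hotel.php?dest=41&island=3&hotel=572
$pdo = new PDO('mysql:host=localhost;dbname=travel', 'user', 'pass');  // placeholder credentials

$parts = array_values(array_filter(explode('/', $_SERVER['REDIRECT_URL'])));
// $parts: [0] => 'hotels', [1] => 'Hawaii', [2] => 'Maui', [3] => 'Grand-Wailea'

function lookup_id(PDO $pdo, $table, $name) {
    // Hypothetical schema: each table has id + url_name columns.
    $stmt = $pdo->prepare("SELECT id FROM $table WHERE url_name = ?");
    $stmt->execute(array($name));
    return $stmt->fetchColumn();
}

$dest   = lookup_id($pdo, 'destinations', $parts[1]);  // Hawaii -> 41
$island = lookup_id($pdo, 'islands',      $parts[2]);  // Maui -> 3
$hotel  = lookup_id($pdo, 'hotels',       $parts[3]);  // Grand Wailea -> 572

if ($dest && $island && $hotel) {
    // Set the parameters the real page expects, then include its code.
    header('HTTP/1.1 200 OK');
    $_GET = array('dest' => $dest, 'island' => $island, 'hotel' => $hotel);
    require 'hotel.php';
    exit;
}

header('HTTP/1.0 404 Not Found');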
Now, keep in mind that you'll probably want to generate the links to your pretty URLs from the database identifiers, rather than hard-code them. For instance, if you have a page that lists all of the hotels on Maui, you'll get all of the hotel IDs from the database for hotels where the destination = 41 and island = 3, and want to write out the links like /hotels/Hawaii/Maui/Grand-Wailea/. The functions you write to do this are going to be very, very similar to the ones you need to decode these URLs in your 404 handler.
Last but not least: you can keep track of 404s that surprise you (i.e. real 404s) by having the page either email you or log the 404'ed URLs to a table in your database.
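That logging can be as simple as an insert at the bottom of the handler, just before you send the 404. Again, the table and column names here are made up; swap in whatever fits your schema.

<?php
// At the point where you've decided to return a real 404:
// record what was asked for and where the visitor came from.
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');  // placeholder credentials

$stmt = $pdo->prepare(
    'INSERT INTO missing_urls (url, referrer, hit_at) VALUES (?, ?, NOW())'  // hypothetical MySQL table
);
$stmt->execute(array(
    $_SERVER['REDIRECT_URL'],
    isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '',
));

// Or, for a low-traffic site, just email yourself instead of logging to a table:
// mail('you@example.com', 'Real 404 hit', $_SERVER['REDIRECT_URL']);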
Performance
For most people, the performance hit of doing the work in .htaccess is not going to be significant. But if you're doing URL prettying for a massive site, or have renamed an enormous list of pages on your site, there are a few things you might want to be aware of--especially with Google now using page load speed as one of its ranking factors.
All requests get evaluated in .htaccess, whether the URLs need manipulation/redirection or not.
That includes your CSS files, your images, etc.
By moving your rewriting/redirecting to your 404 handler, you avoid having your URL pattern-matching code check against every single file requested from your webserver--only URLs that can't be found as-is will hit the 404 handler.
Having said that, note that you can pattern-match in .htaccess for pages you do NOT want manipulated, and use the L flag to stop processing early in .htaccess for URLs that don't need special treatment.
Even if you expect nearly every page requested to need URL de-prettying (conversion to parameterized page), don't forget about the image files, Javascript files, CSS, etc. The 404 handler approach will avoid having the URLs for those page components checked against your conversion patterns every single time they're fetched.
A Special Case
OK, maybe this case isn't all that special--it's pretty common, in fact. Let's say we've moved to a structure of new pretty URLs from old parameterized URLs.
Not only do we have to be able to go from pretty URL --> parameterized URL to generate the page content for the user, we also want to redirect link juice from any old parameterized URL links to the new pretty URLs.
In the actual parameterized web page (e.g. hotel.asp in the above example), we want to do a 301 redirect to the pretty URL. We'll take each of the numeric parameters, look up the destination, island, and hotel name, and fabricate our pretty URL, and 301 to that. There, link juice all saved...
But we've got to be careful not to get into an infinite loop, converting back and forth and back and forth:
When this happens, Firefox offers a message to the effect that you've done something so dumb it's not even going to bother trying to get the page. They say it so politely though: "Firefox has detected that the server is redirecting the request for [URL] in a way that will never complete."
By the way, it's entirely possible to cause this same problem to happen through RewriteRule statements--I know this from personal experience :-(
It's actually not that tough to solve this. In ASP, when the 404 handler passes control to the hotel.asp page, the query string now starts with "404;http". So in hotel.asp, we see if the query string starts with 404, and if it does, we just continue displaying the page. If it doesn't start with 404;http then we 301 to the pretty URL.
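In the PHP/Apache setup sketched earlier, an equivalent check (my assumption, not something from the post) is that $_SERVER['REDIRECT_URL'] is only set when the request arrived through the ErrorDocument handler, so the parameterized page can use that to decide whether to render or to 301 out to its pretty URL. The helper and stub data below are hypothetical.

<?php
// Top of the parameterized page (hotel.php in the earlier sketch).

// Hypothetical helper: turn numeric IDs back into the pretty path
// (in practice this would query the same lookup tables as the 404 handler).
function build_pretty_url($dest, $island, $hotel) {
    $names = array(41 => 'Hawaii', 3 => 'Maui', 572 => 'Grand-Wailea'); // stub data for the sketch
    return '/hotels/' . $names[$dest] . '/' . $names[$island] . '/' . $names[$hotel] . '/';
}

if (!isset($_SERVER['REDIRECT_URL'])) {
    // The ugly URL was hit directly (e.g. an old external link): 301 once to the pretty URL.
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.mydomain.com' .
           build_pretty_url($_GET['dest'], $_GET['island'], $_GET['hotel']));
    exit;
}

// Otherwise we were require()d by the 404 handler for a pretty URL: fall through and render.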
Other References
Information on setting up your 404 handler in Apache:
- http://www.plinko.net/404/custom.asp
- http://www.webreference.com/new/011004.html
- http://www.phpriot.com/articles/search-engine-urls/4
Apache documentation on RewriteRule:
ASP.net custom error pages:
Technorati Tags
RewriteRule, 301, htaccess, 404 handler
Wednesday, January 27, 2010
SEO Services Pakistan: Free Webinar: Getting to Know Open Site Explorer
Posted by great scott!
Last week we unveiled our newest toy, Open Site Explorer, to the world and the response was phenomenal. Now we want to take some time and really show everyone just what this powerful link analysis tool is capable of and answer your questions, so we're hosting not one, but two FREE webinars this week (it's the same content, run twice to help accommodate schedules and time zones). The presentations will be 60 minutes each: 25 minutes of slides, followed by 35 minutes of Q+A, on Wednesday, January 27th at 2:00PM (PST) and Thursday, January 28th at 10:00AM (PST). In each live webinar, Rand will show you around Open Site Explorer, offer tips and strategies for getting the most out of it, explain our new Domain Authority & Page Authority metrics, and answer your questions.
Here's the catch: each webinar is limited to 1,000 attendees. The last time we announced a webinar on the blog, we had over 3,000 people try to register in the first hour, so if you want to attend one of the live sessions, register quickly. If you can't make it, we'll have a recording of the presentation available in a couple of days on our webinars page.
Looooove Webinars and can't get enough of 'em? Then you should totally become a PRO Member! In the last couple of months we've started running regular webinars just for PRO Members and they've been really popular.
A slide from our December PRO Webinar on Link Building Strategies
A slide from our January PRO Webinar on SEO Strategies for 2010
In February we're stepping it up even more. In addition to our monthly educational webinar (February 4th on Analytics), we're adding a second monthly webinar where we'll be performing live site reviews of sites submitted by our PRO Members!
PRO Members can head over to the PRO Webinars page for more info on February's webinars, as well as recordings and slide decks from past webinars. If you'd like to join us for the next PRO Webinar--and possibly even get a live site review--sign up for PRO to access the PRO Webinar page for registration details or just watch your inbox for an invite.
Usman Farooq SEO/SEM Expert Pakistan
SEM Expert Pakistan: It's Only A Clique If You're Not In It
Posted by Dr. Pete
We all know how it feels to be on the outside looking in. You start out feeling awkward and a little envious, but slowly it turns into something worse – depression, resentment, even rage. Eventually, we find a group to belong to, and the tables turn. No matter how often we were excluded (and maybe because of it), we eventually start to exclude others. It's a vicious, if all too human, cycle, and it extends to every corner of our social interactions.
My Friends Are The Best
Just ask them; I'm sure they'll agree. Do we prefer our friends? Do we give them the best opportunities and accolades? Absolutely. This is more than bias, though; it's the simple reality of relevance. If you ask me who the "best" expert is in some niche of my own field or what the best article is on Topic X, I'm going to immediately draw from what I already know. Stating the obvious, I can't recommend someone or something that I don't even know exists.
Of course, there are times when we have a responsibility to dig deeper and look for the best candidates outside of our own limited realm of experience. When I was a graduate student at the University of Iowa, I had the opportunity to be the first student in my department to serve on a faculty search committee. One aspect of that experience that stuck with me was Iowa's affirmative action policy. It wasn't about numbers and quotas so much as a core philosophy that we had a professional obligation to search far and wide for the best candidate. We had the duty to leave our comfortable world of people just like us and venture into the world of "them".
Confirmation Bias
Beyond simple relevance is something more powerful, and sometimes more insidious. We all have a natural tendency to take sides, and, once we do, to find reasons why our side is right and the other side is wrong. Psychologists call this "confirmation bias," the often unconscious need to find data that confirms what we already believe. If we like someone, we'll find reasons to support them and give them the benefit of the doubt. If we dislike someone, we'll find reasons to be suspicious of everything they say and do. If you think confirmation bias is something only other people have, you're fooling yourself.
Choosing Sides
Beyond our friends, confirmation bias quickly begins to apply to all of our cliques and teams. If you're a sports fan, then that team mentality is usually just harmless fun – associating with your team provides a shared emotional experience. I'm a Cubs fan – believe me when I say that I understand the thrill of victory and the agony of defeat, although not in quite the ratio I'd like. What happens, though, when that team mentality starts to apply to things like politics, as we've seen far too often over the past couple of decades (on both sides of the fence)? Suddenly, our clique is 50% of the population, and our enemies are the other 50%. At best, it's divisive. At worst, it breeds hate, violence, and bigotry.
Where Do We Go From Here?
Of course, we all like to think that we're free from bias, but the power of bias is that the flaws that are obvious in others are often hidden and unconscious in ourselves. If I mention that I do SEO, do you picture a savvy internet guru or spam-spewing snake-oil salesman? If you're an SEO, and you hear that I work with SEOmoz, do you think I'm a paragon of white-hat virtue or part of Rand's evil conspiracy to take over the industry? Reality is probably somewhere in between. If I tell you that I voted for Obama, do you see a beacon of liberal hope or a Communist bent on destroying our nation? I can assure you that I am neither. So, how do we get past these labels and start to understand people, whether personally or professionally?
Get to Know People
Social media has given us a difficult dichotomy. On the one hand, it's never been easier to "friend" people in shallow and meaningless ways. On the other hand, we have the tools to get to know our peers and friends of friends in ways that were never before possible. The next time you friend someone, take a moment and find out something about them. Where are they from? What do they do? What kind of music do they like? Do they blog? If they do, read a post. If you see a label ("liberal", "conservative", "Twilight fan"), don't jump to conclusions. Give that person a chance to speak for themselves.
Play In a Different Park
It's easy to be self-righteous when you're surrounded by your fan-boys and girls. It's easy to get a standing ovation at your campaign rally when you only invite the people who gave you the most money. If you want perspective, you have to give up the home-field advantage. If you disagree with someone, comment on their post instead of running back home to write a rant. Try guest-blogging – even better, guest-blog in a different industry. Try to explain why SEO is worthwhile to an audience of small business owners, designers or UX professionals. It'll be a tough sell, but you'll learn a lot in the process.
When In Doubt, Ask
Social media is a mine field of misunderstanding – if you're not sure what someone means in that 140-character Tweet, ask them. If they write a blog post that seems like a personal attack, call them. It's not just about being nice – bad blood runs deep, and today's simple misunderstanding could destroy relationships and opportunities tomorrow.
Open Your Circle
We all remember the people who excluded us, and we too often hold that fact against the universe. Let it go. When you finally get into that circle, especially your professional circle, try to remember that someone else is still outside looking in. Here are a few ways to give someone else a chance, because we can all use a little good karma:
- Promote other people's links and awards, even the competition.
- If you're at a conference talking to a group and you see someone standing outside the circle with that awkward look of faux participation, invite them in.
- Make an introduction to help someone's career along.
- If someone is new to blogging, comment, subscribe, or even link to them.
- When someone challenges you publicly, listen and think before you counterattack.
- Don't envy other people's success – learn from it and improve.
- Every once in a while, shut up and listen.
At the end of the day, those of us who have attained some measure of success need to remember that we all had a little help along the way. Try to return the favor once in a while.
Photo licensed from iStockPhoto.com (Photographer: Hélène Vallée)
Usman Farooq SEO/SEM Expert Pakistan
Thursday, January 21, 2010
SEO Pakistan/ Information technology services Pakistan: Q & A About Using Q & A Sites to Build Your Business & Reputation
Posted by Gil Reich
This post was originally in YOUmoz, and was promoted to the main blog because it provides great value and interest to our community. The author's views are entirely his or her own and may not reflect the views of SEOmoz, Inc.
Q&A sites are a great way to get your message across and to build your brand and reputation.
How many people use Q&A sites?
- In a recent Business.com study, 49% of companies that use social media said they ask questions on Q&A sites. Only 29% said they use Twitter to find business-related information. The 49% doesn't even include the many who get info from Q&A sites by Googling or Binging.
- Answers.com (where I work) is now ranked (by comScore) as the 17th most visited site in the US. The vast majority of Answers.com's traffic is to user generated Q&A pages. Yahoo! Answers gets even more traffic. Much of your potential market is already getting their answers from these sites.
Source: Social Media Best Practices: Question & Answer Forums. Business.com, December 14, 2009, http://www.business.com/info/social-media-best-practices-q-and-a
What's in it for me?
Providing quality answers and links to relevant pages can help you in the following ways:
- Direct your customers (and potential customers) to accurate information about your product.
- Connect with people in your market, build your reputation, and generate leads.
- Provide links back to your site. Some of these links are Follow links, and thus also provide SEO value.
How do I use these sites?
The general rules of social media apply here too:
- Help others
- Build relationships
- Push your products and services when they answer somebody's question or request.
Q&A sites work great for this, because people are already asking the questions. When I blog I hope my posts address questions that my readers want answered, but they may not. In Q&A sites, your starting point is that somebody asked the question that you're answering.
Specifically:
- Search the Q&A sites for questions about your subject, and browse the relevant categories.
- Answer questions fairly and accurately. If appropriate, mention your product or service, and / or link to a relevant page on your site.
- Follow up & interact where appropriate. Use these sites' message boards to see if you can be of further help, or to congratulate another contributor for a great answer.
- Fill in your User Profile, showing why people should like and trust you. You can also usually link to your site from your User Profile.
In the example below, notice how the user provided a quality answer (much of which follows a template he uses in other answers as well) and added a relevant link to his site.
What are the leading sites and how do they differ?
- Yahoo! Answers: The biggest site in the industry, with 47 million US visits in November according to comScore (and that's probably a very conservative estimate). It's a broad horizontal site. Questions are open for 4 days. Users answer the question, and vote on the best answer. The best answer is selected by either the asker or by the community.
- Answers.com / WikiAnswers: Answers.com has 41 million monthly US visitors according to comScore, making it second to Yahoo! but far larger than the other Q&A sites. It's also a broad horizontal site. Its key differentiators are:
- It's connected to a reference site, so if you ask "What is the abstention doctrine?" your answer will come from West's Law and the Oxford University Press.
- It's a wiki, so instead of multiple users providing multiple answers, users collaborate on one answer.
- In most cases answers don't get closed, so you can find questions asked more than 4 days ago and still contribute to the answer.
- LinkedIn Answers & Business.com Answers: These sites are great for more targeted communication, lead generation, and reputation building. Think of Yahoo! Answers and Answers.com as more B2C, and these sites as more B2B. This is Q&A in the context of advanced professional networking sites.
- Stack Overflow and its siblings: Stack Overflow is a great Q&A site for programmers. If you're a software developer and you want to establish yourself as an expert and to network with your peers, this site's perfect. The same technology is now powering other niche sites, most notably serverfault.com (for system administrators) and Answers on Startups, which Rand Fishkin just named one of the 10 Sources I've Come to Love.
- Aardvark: Aardvark is more of a closed system where you ask questions to people in your network. This is great for well connected journalists and bloggers to get answers from their network, but may not be ideal for spreading your message beyond your social circle.
How is using them like doing a guest post on SEOmoz?
Answering questions on Q&A sites is exactly like doing a guest post on SEOmoz:
- Find the sites where the people you need are getting their information.
- Give them quality information that will benefit them.
- Get your own message across, with full disclosure of who you are. You can be self-serving, but not too self-serving.
- Build relationships, and establish your expertise.
Ultimately you need a win-win here. You need to serve the needs of the community with whom you're interacting, in a way that also builds your business and reputation.
Where can I get more information on Q&A sites?
See the following excellent articles:
- Jason Falls: How to drive business leads with Q&A forums
- Using Yahoo! Answers to generate leads. Does it work?
- Lisa Barone: Finding Answers on Business.com
- Business.com's Study: Social Media best practices: Q&A forums
Or contact me (Answers.com user: Gilr)
Wednesday, January 20, 2010
SEO pakistan / SEM Pakistan: Using Yahoo! Answers to Generate Leads - Does it Work?
Posted by drummerboy9000
This post was originally in YOUmoz, and was promoted to the main blog because it provides great value and interest to our community. The author's views are entirely his or her own and may not reflect the views of SEOmoz, Inc.
Inspired by a great post by Vaidhyanathan, I began answering questions on Yahoo! Answers near the beginning of this year. Since then I have answered over 50 questions, nearly always related to metal roofing. Nearly 12 months later I sat down and did a study to find out:
Was the time I was spending answering these questions resulting in a reasonable amount of usable leads?
On to the data:
From my 53 answers, I received 562 visits, with a bounce rate of 33.5% and an average of 3:03 spent per visit.
First I compared the conversion rates of Yahoo! Answers traffic with PPC rates from the same period:
(Conversions are either a customer filling out and submitting a form for more information, or clicking on a link to our contact page.) The data this chart was made from is available as an Excel spreadsheet here.
As you can see, the conversion rates from Yahoo! Answers are nowhere near those coming from our PPC campaigns. That being said, to accurately understand how much this traffic is worth, you need to find out the cost of this traffic per visitor.
This data is available as an interactive Excel spreadsheet here.
The conversion rates for traffic from Yahoo! Answers are lower than those of PPC traffic, but the cost per visit is less. So to accurately compare them, we do some more number crunching:
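To make the comparison concrete, the number that really matters is cost per conversion (cost per visit divided by conversion rate). The figures below are entirely made up, since the real PPC data isn't public; they just show the shape of the calculation.

<?php
// Entirely hypothetical numbers -- the post does not disclose the real figures.
$channels = array(
    'Yahoo! Answers' => array('cost_per_visit' => 0.10, 'conversion_rate' => 0.02),
    'PPC'            => array('cost_per_visit' => 1.50, 'conversion_rate' => 0.08),
);

foreach ($channels as $name => $c) {
    $cost_per_conversion = $c['cost_per_visit'] / $c['conversion_rate'];
    printf("%s: \$%.2f per visit / %.0f%% conversion rate = \$%.2f per conversion\n",
           $name, $c['cost_per_visit'], 100 * $c['conversion_rate'], $cost_per_conversion);
}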
(For privacy reasons I unfortunately can't give you all the PPC data I would like to.)
So basically, for Best Buy Metals, it makes sense to continue spending time answering questions on Yahoo! Answers.
Important tip:
Don't rush out to Yahoo! Answers and answer every question with a link to your website at the end!
Answer relevant questions in a non-spammy way. Then include your website as a source at the end. You are the source, as a representative of your company, so this is not deceptive, and people don't mind it. You can check out my Yahoo Answers profile here.
Coming soon... How to use Yahoo! Answers in a way that benefits your company and the Yahoo! Answers community - The complete guide.
Usman Farooq SEO/SEM/SMM Expert Pakistan
Usman@technotera.com