Hot Koehls

The more you know, the more you don’t know

This content is a little crusty, having been with me through 3 separate platform changes. Formatting may be rough, and I am slightly less stupid today than when I wrote it.
05 Aug 2009

Archive your entire Twitter timeline

My code for displaying Twitter posts on your site is pretty handy, but it does have drawbacks. Each page load involves calling a remote URL, downloading the resulting XML file, and parsing the results, which increases your load times and uses bandwidth. To minimize the impact, you can really only display a handful of the most recent posts. Plus, the downloaded stream is never saved. Google does index Twitter, but the thoroughness and benefit to you are subject to much speculation.

We can solve both problems by locally storing and serving Twitter posts ourselves. Once you have them in your own system, you can display as many of them as you want without expensive external URL lookups. Plus, with the content centrally located on your site, getting Google to index it and apply it to your rankings is straightforward.

Note for SEO geeks:

Yes, I am aware that displaying and indexing Twitter posts on your own site technically falls under the category of duplicate content, so save your typing. Given the disparate nature of Twitter content and its utter disconnect from my sites, I'm not too concerned about incurring a penalty for it. Your opinion and experience may vary. You should at least familiarize yourself with Google's rules for duplicate content. If you're paranoid, consider applying canonicalization to pages that display large portions of a Twitter timeline.

Let's get started
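For reference, canonicalization just means dropping a tag like this into the `<head>` of each archive page, pointing at whatever URL you want Google to treat as authoritative (the URL here is a placeholder):

```html
<link rel="canonical" href="http://example.com/twitter-archive/" />
```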

The end of the post includes a link to download all the code, as well as a link to a live demo. I'm assuming that you've got a standard PHP/MySQL stack for your site, ideally running on Linux, super-ideally Debian (Digg uses it for a reason, you know). I'm also assuming that you know how to use it; bring a decent understanding of SQL, PHP, and basic web programming. Here's your first test: the demo assumes your PHP installation is version 5 and includes the SimpleXML extension. First, here's the CREATE TABLE statement for the table that our example will use. Apply this to your database:


CREATE TABLE `twitter` (
  `id` bigint(10) unsigned NOT NULL,
  `created_at` datetime NOT NULL,
  `source` varchar(255) NOT NULL,
  `in_reply_to_screen_name` varchar(255) NOT NULL,
  `text` varchar(255) NOT NULL,
  UNIQUE KEY `id` (`id`)
);


Now let's have a look at the class, which is the meat of the entire thing:
class Twitter {

  private $id;

  public function __construct($twitter_id) {
    $this->id = (int)$twitter_id;
  }

  public function user_timeline($page, $count = '200', $since_id = '') {
    $url = 'http://twitter.com/statuses/user_timeline/' . $this->id . '.xml?count=' . $count . '&page=' . $page;
    if ($since_id && $since_id != '') {
      $url .= '&since_id=' . $since_id;
    }

    $c = curl_init();
    curl_setopt($c, CURLOPT_URL, $url);
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_CONNECTTIMEOUT, 3);
    curl_setopt($c, CURLOPT_TIMEOUT, 5);
    $response = curl_exec($c);
    $responseInfo = curl_getinfo($c);
    curl_close($c);

    if ($response != '' && intval($responseInfo['http_code']) == 200) {
      if (class_exists('SimpleXMLElement')) {
        return new SimpleXMLElement($response);
      } else {
        return $response;
      }
    } else {
      return false;
    }
  }

  public function rebuild_archive($your_timezone) {
    $orig_tz = date_default_timezone_get();
    $tz = new DateTimeZone($your_timezone);

    $sql = "SELECT id FROM twitter ORDER BY id DESC LIMIT 1";
    /*
     * INSTALLATION
     * execute $sql on your DB to get the latest twitter post
     * set the value of `id` to a variable named $since_id
     * set $since_id to false if the table is empty (i.e. a new install)
     */

    $tweet_count = 0;
    for ($page = 1; $page <= 200; ++$page) {
      if ($twitter_xml = $this->user_timeline($page, '200', $since_id)) {
        foreach ($twitter_xml->status as $key => $status) {
          $datetime = new DateTime($status->created_at);
          $datetime->setTimezone($tz);
          $created_at = $datetime->format('Y-m-d H:i:s');
          $sql = "INSERT IGNORE INTO twitter
                    (id, created_at, source, in_reply_to_screen_name, text)
                  VALUES (
                    '" . $status->id . "',
                    '" . $created_at . "',
                    '" . addslashes((string)$status->source) . "',
                    '" . addslashes((string)$status->in_reply_to_screen_name) . "',
                    '" . addslashes((string)$status->text) . "'
                  )";
          /*
           * INSTALLATION
           * Execute $sql over your DB here
           */
          ++$tweet_count;
        }
      } else {
        // No more results: we've hit the end of the available timeline
        break;
      }
    }

    $sql = "ALTER TABLE twitter ORDER BY `id`";
    /*
     * INSTALLATION
     * Execute $sql over your DB here
     */

    date_default_timezone_set($orig_tz); // restore the original timezone
    return $tweet_count;
  }
}

This method is a modified version of my previous `twitter_status()` function.
The big difference is that we're passing additional arguments to Twitter's user_timeline API call: **`count`** (specifies the number of statuses to retrieve) and **`page`** (specifies the page of results to retrieve).
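To see how those arguments come together, here's a standalone sketch of the URL assembly, pulled out of the class for illustration (the `build_timeline_url()` function name is mine, not part of the class, and the old XML endpoint is shown purely to demonstrate the query string):

```php
<?php
// Illustrative rebuild of the URL that user_timeline() assembles.
// count, page, and since_id are appended as plain query-string arguments.
function build_timeline_url($id, $page, $count = '200', $since_id = '') {
  $url = 'http://twitter.com/statuses/user_timeline/' . $id . '.xml'
       . '?count=' . $count . '&page=' . $page;
  if ($since_id && $since_id != '') {
    $url .= '&since_id=' . $since_id;
  }
  return $url;
}

echo build_timeline_url(12345678, 2, '200', '987654');
// http://twitter.com/statuses/user_timeline/12345678.xml?count=200&page=2&since_id=987654
```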

This method takes the results from `user_timeline()` and places them in your DB. Its lone argument is the string representation for the timezone of your server. To find out what the string is and why you need it, just read the second post of my twitter series. For me on the US east coast, I use `'America/New_York'`.
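If you're curious what that timezone conversion looks like in isolation, here's a minimal sketch using a made-up timestamp in the format the XML API returned:

```php
<?php
// A sample created_at value (made up) in Twitter's old timestamp format.
$raw = 'Wed Aug 05 14:30:00 +0000 2009';

$datetime = new DateTime($raw);  // parsed as UTC thanks to the +0000 offset
$datetime->setTimezone(new DateTimeZone('America/New_York'));

echo $datetime->format('Y-m-d H:i:s');  // MySQL DATETIME format, local time
```

On the US east coast in August (EDT, UTC-4) that prints `2009-08-05 10:30:00`.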
**Quick Warning**

Hopefully you noticed several large comment blocks with INSTALLATION in all caps: **I didn't include any code to run SQL over your DB**. Every system includes its own wrapper for database calls, including mine, so I'm not wasting time writing out SQL inserts with raw PHP functions that you'll just remove. Find the three blocks labeled "INSTALLATION" and follow the instructions to execute the listed SQL.
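If your stack doesn't already have a wrapper, PDO with prepared statements is one reasonable way to fill in those blocks. This sketch is illustrative only: it uses an in-memory SQLite database and a trimmed-down column list so it runs anywhere, so swap in your MySQL DSN, credentials, and the full column list. Note that prepared statements also make the `addslashes()` calls unnecessary:

```php
<?php
// Illustrative: SQLite in-memory DB standing in for your MySQL database.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE twitter (id INTEGER PRIMARY KEY, text TEXT)');

// SQLite spells MySQL's INSERT IGNORE as INSERT OR IGNORE.
$stmt = $db->prepare('INSERT OR IGNORE INTO twitter (id, text) VALUES (?, ?)');
$stmt->execute(array(3216245970, "Status text with a quote: it's escaped for you"));
$stmt->execute(array(3216245970, 'Duplicate id, silently ignored'));

echo $db->query('SELECT COUNT(*) FROM twitter')->fetchColumn(); // 1
```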
Now we just need to run it.

$Twitter = new Twitter('12345678');
$Twitter->rebuild_archive('America/New_York');

We instantiate the class and pass the ID number of our Twitter account. You'll find instructions on getting this number about halfway down my first post on displaying Twitter updates. After that, a single call to `Twitter::rebuild_archive()` will grab all available updates and store them.
If the ``twitter`` table is empty, it will grab your entire Twitter timeline, up to 3200 posts. If you have more than 3200 posts, you're out of luck for the time being, although I'd recommend you take a break from the computer, take a shower, and say "Hi" to the wife and kids.
After the first run, subsequent runs will only grab new posts by way of the API's since_id argument.
If you have access, you can easily make this into a cron job:

#!/usr/bin/php
<?php
// require the file containing the Twitter class here
$Twitter = new Twitter('12345678');
$Twitter->rebuild_archive('America/New_York');
?>

Save that last block of code to a file, set it to be executable (chmod 755 usually), and set the job to run hourly. That top line identifies the interpreter that the system should use to read the file. You may need to change it to reflect the location of the PHP executable on your system.
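The crontab entry for an hourly run might look like this (the script path is whatever you chose above; running at minute 0 is arbitrary):

```
0 * * * * /path/to/twitter_archive.php
```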
Want to see everything described above in action? Check out the Developer's Diary on Fwd:Vault.
Don't worry about cut 'n paste, just download the zip file with the class and all the examples:

Twitter Archiver (.zip)
**Update 08-19-2009:** Removed references to function calls specific to my framework.
**Update 12-16-2009:** The `id` field has been bumped up to a BIGINT. Twitter ID numbers are bigger than what an unsigned INT field can hold.

