
  • quartzjer 7:46 pm on May 1, 2012 Permalink | Reply

    Next steps for the Locker Project 

    Since we started this journey, the Locker Project and Singly have progressed side-by-side, with Singly as the hosted experience that sponsors the Locker Project.

    After opening Singly’s Locker hosting to developers and getting lots of feedback on all of the possibilities that a hosted Locker could enable, the resounding theme was that developers want to use the API first and foremost.  Based on that, the Singly team has been working on an effort to bolster the API aspects of the codebase so that it can support apps at a large scale.  There’s been excellent progress in creating a cross-platform, cross-service API that provides merged, normalized and de-duplicated data on which apps can be built.
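To make the merging idea concrete, here is a minimal sketch of normalizing two services' contact records into one shape and de-duplicating the overlap. The field names and the name-based matching rule are illustrative assumptions, not Singly's actual schema:

```javascript
// Hypothetical normalized contact shape; the real Singly/Locker schemas differ.
function normalizeContact(service, raw) {
  if (service === 'twitter') {
    return { name: raw.name, handle: raw.screen_name, email: null };
  }
  if (service === 'gcontacts') {
    return { name: raw.fullName, handle: null, email: raw.email };
  }
  throw new Error('unknown service: ' + service);
}

// De-duplicate by a simple key (lowercased name); real matching is fuzzier.
function dedupe(contacts) {
  const seen = new Map();
  for (const c of contacts) {
    const key = c.name.toLowerCase();
    const prev = seen.get(key);
    // Merge: keep any field the other record was missing.
    seen.set(key, prev
      ? { name: prev.name, handle: prev.handle || c.handle, email: prev.email || c.email }
      : c);
  }
  return [...seen.values()];
}

const merged = dedupe([
  normalizeContact('twitter', { name: 'Jeremie Miller', screen_name: 'jeremie' }),
  normalizeContact('gcontacts', { fullName: 'Jeremie Miller', email: 'jer@example.com' }),
]);
// One merged record, with both the handle and the email filled in.
```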

    So now there is the question of how best to support and improve the open-source Locker Project, since it encompasses more than just an API.  It’s an effort everyone in this community cares deeply about and has helped create a shared vision for, so we’re calling upon you to help decide what the most powerful direction for the Locker Project should be.

    Please take the time to fill out this survey, and most importantly, thank you all for your ongoing support and hard work!

     
  • quartzjer 1:01 am on January 22, 2012 Permalink | Reply  

    Code overhaul of the “map” 

    This last week landed one of the larger sweeping updates to the locker core codebase. It cleans up a mountain of tech debt that had crept in over the last year in how the various services (apps, collections, and connectors) are loaded and handled, and it is also a solid step toward our Architecture Initiatives.

    It’s one of the most exciting advancements yet, since it enables very dynamic and rich growth of the available apps and connectors: everything is installed on demand using npm into a running locker. The core router manages the process, so there’s almost no friction for developers creating new experiences or connecting additional sources of data. Locker owners now have instant access to the best apps as they’re created.

    There’s more detail on the wiki’s Registry page for anyone doing or planning development. To help with local testing, it lets a local dev locker pick up changes live using the upsert command, for faster turnaround and more flexible dev patterns.

    Also, node v0.6 support has finally nearly landed in master, there’s some “me” profile support to make it easier for collections to differentiate produced vs. received data, and some music and fitness connectors are showing up!

     
    • Duane Johnson 10:10 pm on January 27, 2012 Permalink | Reply

      Way to go guys! Thanks for writing these updates as you go along. I’m itching to dive in but waiting for the barrier-to-entry threshold to be low enough that I believe it becomes an effective use of my (part) time. So these blog posts are very helpful to gauge whether or not the time is right for me. And after this post, it almost feels right :)

    • quartzjer 11:08 pm on January 27, 2012 Permalink | Reply

      Thanks! The barrier is dropping fast, we’ll def keep updating :)

    • Akshaya 1:06 pm on March 28, 2012 Permalink | Reply

      Hi, does the Locker code available on GitHub support Node v0.6 now?

      • Matt Zimmerman 3:35 pm on March 28, 2012 Permalink | Reply

        Yes it does!

        • Akshaya 3:37 pm on March 28, 2012 Permalink

          OK thanks! Then there’s a big need to update the documentation, because it still says v0.4.9.

  • quartzjer 11:36 pm on January 3, 2012 Permalink | Reply  

    Architecture Initiatives 

    There has been so much incredible progress in the codebase lately, and there is a tremendous amount of fun stuff yet to do, so I thought it would be helpful to outline some overall architecture thoughts of my own that guide my excitement and interest :)

    Data Storage

    Include tools for the codebase to manage data storage internally, and expose utilities for owners to control and add storage endpoints:

    • visible to owner (where is my stuff stored)
    • ability to connect storage endpoints:
      • dropbox
      • s3
      • wordpress
      • personal desktop/laptop
      • google/icloud/amazon?
    • have the locker incrementally back itself up to the connected storage
    • let apps have the ability to direct raw copies of things (like photos) to any storage endpoint
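The endpoint list above implies a small common interface that each backend (Dropbox, S3, a desktop agent, and so on) would implement, plus an incremental backup loop that only copies what the endpoint doesn't already hold. A rough sketch, with entirely hypothetical names and an in-memory stand-in for a real backend:

```javascript
// Illustrative storage-endpoint interface; names are hypothetical, not from the repo.
// Each real backend (dropbox, s3, desktop) would provide put/get/list.
class MemoryEndpoint {
  constructor(name) {
    this.name = name;        // visible to the owner: "where is my stuff stored"
    this.blobs = new Map();
  }
  put(key, data) { this.blobs.set(key, data); }
  get(key) { return this.blobs.get(key); }
  list() { return [...this.blobs.keys()]; }
}

// Incremental backup: copy only the keys the endpoint doesn't have yet.
function backupTo(endpoint, records) {
  const have = new Set(endpoint.list());
  let copied = 0;
  for (const [key, data] of Object.entries(records)) {
    if (!have.has(key)) { endpoint.put(key, data); copied++; }
  }
  return copied;
}

const s3 = new MemoryEndpoint('s3');
backupTo(s3, { 'photos/1.jpg': '...' });  // first pass copies the one new item
const copied = backupTo(s3, { 'photos/1.jpg': '...', 'photos/2.jpg': '...' });
// The second pass copies only photos/2.jpg.
```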

    TeleHash

    This work has been ongoing for a while and is now finally starting to surface some basic functionality. A new lightweight node implementation is in progress, with the intention of integrating it into the locker core for the uses described below.

    Identity & Sharing

    Have the locker core manage and be aware of the profile data of every connected identity/service, and then enable the signing of those identifiers so that they can be shared via TeleHash in a verifiable way.  Enable apps to request connection/sharing methods with anyone that will have a level of verification/trust based on this, with direct encrypted verified peer-to-peer sharing being possible.

    • improve the contacts collection with edit-ability features
    • build basic permissions system for sharing things in your locker with others
    • experiment with caja/capabilities-based access logic (share bits of code)
    • include safe/simple experiences explaining what is being shared, how securely, and with whom
    • strong access logging of anything shared

    Routing Mesh – Core as a personal VPN

    The core ideally goes away entirely, leaving each component of the locker to attach to a shared mesh that knows how to send/receive with each other.  This enables the locker to be essentially a personal VPN with all the components that talk P2P using REST+JSON, across all of your personal devices and any hosted pieces.  This builds on TeleHash, key management, pairing, network path detection, and building out supporting software for all the desktop/mobile platforms, a lot!


    You can find (and contribute to) even more in the original raw brain dump style document on google docs. Looking forward to an incredible 2012!

     
  • quartzjer 6:11 am on December 20, 2011 Permalink | Reply

    Cranking away! 

    We’ve not used this blog very much recently but would love to get back into some regular use of it again :)

    In catching everyone up, the codebase has been moving *very* rapidly (https://github.com/LockerProject/Locker/network) and we’ve got a singly-dev mailing list starting to get a little traffic (http://groups.google.com/group/singly-dev).

    There are a lot of open pull requests (https://github.com/lockerproject/locker/pulls) landing this week as well, including a new search backend (no need for clucene and cmake anymore; it uses sqlite’s full-text search, which npm fully installs), soundcloud, gowalla, a generic synclet auth system, a timeline collection, and some experimental post-to-service support!

    Hope to see you all soon in #lockerproject on freenode, on the mailing list, twitter, here, or in a pull request! ;-)

     
  • ctide 12:48 am on August 19, 2011 Permalink | Reply

    Synclets 

    Temas touched on this a bit in his last blog post, but I wanted to write a more detailed explanation of where things are at for synclets.  The short synopsis is that synclets are a simple way to pull data from a provider and feed it into the system.  They are basic routines that are fed authentication keys and some configuration, and use that information to pull down data from the provider, which is funneled back into locker core as JSON.  They are a drastic reduction in scope from what’s currently required for a connector.  Adding new connectors today relies on the developer managing a lot of pieces (authentication, running a web server, processing data from the source, feeding that data into Mongo, generating correct events), and since the majority of those pieces are fairly common across all of the connectors, we want to provide a system that manages these common components.
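In rough terms, a synclet boils down to a single function: given auth keys and sync config, fetch whatever is new and hand JSON back to the core, which takes care of storage and events. The sketch below only illustrates that shape; the names and callback signature are hypothetical, not the project's actual synclet API:

```javascript
// Illustrative synclet shape (not the real lsyncmanager API): the core hands in
// auth keys and config, and the synclet returns new data plus updated config.
function checkinsSynclet(state, fetchPage, callback) {
  // state.auth: the provider's OAuth keys; state.config: sync bookkeeping,
  // like the timestamp of the newest item we've already seen.
  const since = state.config.since || 0;
  fetchPage(state.auth, since, (err, items) => {
    if (err) return callback(err);
    callback(null, {
      data: { checkin: items },  // funneled back into locker core as JSON
      config: { since: items.reduce((m, i) => Math.max(m, i.at), since) },
    });
  });
}

// A fake provider for demonstration: returns items newer than `since`.
const allCheckins = [{ id: 1, at: 100 }, { id: 2, at: 200 }];
function fakeFetch(auth, since, cb) { cb(null, allCheckins.filter(i => i.at > since)); }

let result;
checkinsSynclet({ auth: {}, config: {} }, fakeFetch, (err, r) => { result = r; });
// result.data.checkin holds both items; result.config.since advances to 200,
// so the next run would fetch nothing new.
```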

    The first step towards this end was to convert the stable connectors into packages composed of synclets being managed by the common code.  This has been completed (we’re still working on cleaning up some of the patterns to get them right, but we have a start checked in that we’re playing with) and they are now ready for people to start poking at.  The current plan is to eventually migrate all of the connectors (and all the data your connectors have been collecting!) to synclet powered versions.  For the time being, existing connectors won’t be affected by any of the work being done around synclets.

    The missing pieces that will tie this work together and make it easy for developers to implement their own synclets mostly surround authentication and a UI for managing installed synclets.  We will be implementing something like everyauth to manage authentication for synclets in the near future.  This will let us simplify both the UI and the implementation of authentication, and allow us to provide authentication keys to any number of synclets that want to pull data from a provider.

    What does this mean to developers hacking on connectors today?

    Not a whole lot just yet.  We want to spend some more time ensuring that we have this pattern right, and connectors will continue to exist exactly as they are today during that period.  If you’re feeling super adventurous, feel free to poke through some of the code in lsyncmanager, and look at the new synclets that are now provided with the connector code for Facebook, Twitter, Github, Foursquare, and Google Contacts.  Once the authentication and UI pieces have been baked into the project, we’ll write a more detailed post describing exactly how you would use those pieces to build synclets.

     
    • mr. sync 12:03 pm on September 7, 2011 Permalink | Reply

      When can we expect this to be available?

      • mr. waiting 4:09 pm on November 29, 2011 Permalink | Reply

        And what about TeleHash and interconnecting lockers?

    • ctide 5:09 pm on September 7, 2011 Permalink | Reply

      Hi,

      It’s already partially in there, but we’ve descoped the auth frontend piece for now. I’d imagine it’s still a month or two in the future, but we’ve been adding in more synclets. If you take a look at https://github.com/LockerProject/Locker/commit/0e65f2f4764b5453048f5ec7efcc91fbb66f58b5 you’ll see the guts of adding a new synclet (flickr, in this case).

      • mr. waiting 4:06 pm on November 29, 2011 Permalink | Reply

        Any progress at this front?

  • temas 2:20 am on August 16, 2011 Permalink | Reply

    Another fast week but we stayed super focused… 

    Another fast week, but we stayed super focused.

    Synclets

    These are a break out from the connector work to try and figure out better patterns around collecting data from outside sources. If you are a connector author then you definitely want to check out the code and give the wiki page a read.

    Search

    We’ve been exploring different patterns of using search within the Locker, which Eric wrote about, and we’ve finally got a new bit working. The CLucene implementation has come alive! It’s still basic, but it builds a single query point for everything in your Locker. You can try it out with the Locker Search app, but it still needs some UI love.

    Photo Viewer

    Tom just landed (literally as I’m writing this!) an experimental photo viewer that looks fantastic. So go get your photo connectors set up and quickly browse them with this great viewer.

     
  • ethomjenn 10:35 pm on August 12, 2011 Permalink | Reply

    A Deeper Dive on Locker Search 

    For the last several weeks, a couple of us on the Singly team have been taking a deep dive into the idea of search within a Locker.  As most people already know (and as we were quick to discover), search seems simple but can become large, difficult, and unwieldy if not done with care and attention.  We wanted to share a couple of the thoughts we’ve had over these last few weeks, and let you know about some cool things we discovered.

    Discovery 1:  Lucene is a pretty amazing information retrieval library

    Lucene is an open-source Apache project that handles information storage and retrieval, built specifically for search.  It is, however, a Java library, meaning we can’t easily slide it into the existing Node.js-based Locker codebase.

    A little detail about Lucene, and why it’s so powerful.

    • Lucene implementations have scaled up to the terabyte range of storage.  This bodes well, as each user’s Locker may grow quite large over time, and we don’t want to outgrow the architecture if possible.
    • Lucene has clearly-defined concepts of text processing, both when indexing (placing information into the system) and when querying (pulling information out of the system).
    • Lucene also has some subprojects that may turn out to be very helpful for Locker users. One in particular is Tika, which can extract the textual contents from binary files such as PDFs and Word documents.  This is super helpful if, for instance, you wanted to not only search across your entire Gmail account, but also wanted to search within any attachments others have sent to you.
    • The Lucene library has been battle-hardened over several years of active development and lots of implementations in production.  Some very important thoughts have emerged around scaling Lucene, based on actual users pushing it to its limits.  This gives us a lot of useful information, and shows us what not to do when adding Lucene to Lockers.

    In short, we were pretty sold on Lucene as the search platform of choice.  However, we weren’t ready quite yet.

    Discovery 2:  Lucene has lots of implementations, and each of these has tradeoffs

    If only we could download and install a simple library and have full-search Lockers that easily!  Unfortunately, there are several implementations of Lucene, and we want to choose the best one based on the unique requirements of Lockers.

    First, we tried elasticsearch, a very robust implementation of Lucene with the capability to scale to MASSIVE sizes.  It also has a very robust method for indexing and querying the information store from practically any client.  We prototyped this one first, knowing that it was large and probably overkill, because its maturity meant we could begin to get search results in a day or two.

    Sure enough, a couple days later, we were searching across the Locker, as you can see here:

    [Screenshot: search results across the Locker]

    However, this required every Locker user to run a full instance of elasticsearch alongside their Locker, which is a heavy requirement. Still, we loved how easy it was to get useful searching going, so we committed that code as the Locker app called “Search”, and forged ahead.
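For reference, talking to elasticsearch amounts to POSTing a JSON query-DSL document to its HTTP API. This sketch only builds such a body; the index and field names are made up for illustration, not the ones the Search app actually used:

```javascript
// Build an Elasticsearch query-DSL body for a full-text search across locker
// data. The field name here ('fullText') is an illustrative assumption.
function buildSearchBody(terms, size) {
  return {
    size: size,
    query: {
      query_string: { query: terms, default_field: 'fullText' },
    },
  };
}

const body = buildSearchBody('lockerproject', 10);
// In the prototype this would be POSTed to the local elasticsearch instance,
// e.g. http://localhost:9200/locker/_search, via Node's http module.
```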

    Discovery 3:  Lucene isn’t only a Java library

    How to get Lucene embedded within the Locker?  That was our next task.  We wanted something lightweight, fast, and embedded so we didn’t have to add yet another external requirement.

    That’s when we learned that a group had ported the Lucene library to C++; the project is called CLucene. This was fantastic news for us, as we figured we could wrap CLucene in a native Node.js module, and it would be extremely fast and small while still retaining the power of Lucene.

    Discovery 4:  CLucene looks great, but it’s based on a much older Lucene version

    Daaaaamn.  We were so close, and it turns out that CLucene is missing some of the features we really wanted from current Lucene implementations, specifically geopoint support and updating existing documents.  Now we needed to look closely at CLucene to see how much we would need to implement ourselves.

    But first, like most of our projects at Singly, we applied the mantra “Make it work.  Make it right.  Make it fast.”  So before getting it as full-featured as we desired, we first just wanted to make it work inside the Locker natively.  This turned out to be a good learning experience, as we now know how to write native Node.js modules, and that opens up the massive number of C and C++ libraries out there to us.

    Enough technobabble, let’s see what this sucka can do!

    Okay!  Here are some queries you can run on your Locker:

    • “eric” – Find everything that has the term “eric” in it, case-insensitive.
    • “eric~” – Do a fuzzy search for “eric”, meaning be a bit forgiving about misspellings.  This will return data containing terms like “erick” and “erik”, as well as “eric”.
    • “+lockerproject singly” – Search for data that definitely has the term “lockerproject” in it, but may or may not have the term “singly”.
    • “+lockerproject singly^5” – The same search as above, but for records that do have the term “singly”, boost their scores much higher so they show up closer to the top of the results.
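One practical note: if queries like these are ever built from user input, the characters Lucene treats as operators (+, ~, ^, and friends) need escaping. A small hypothetical helper:

```javascript
// Escape Lucene's special query characters in user-supplied terms, so that a
// literal "+" or "~" in the input isn't parsed as an operator.
function escapeLucene(term) {
  return term.replace(/[+\-!(){}\[\]^"~*?:\\\/&|]/g, '\\$&');
}

// Compose a query programmatically: require one term, boost another.
function requireAndBoost(required, optional, boost) {
  return '+' + escapeLucene(required) + ' ' + escapeLucene(optional) + '^' + boost;
}

const q = requireAndBoost('lockerproject', 'singly', 5);
// q === '+lockerproject singly^5'
```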

    What’s next?

    Well, this is where we need to add functionality to CLucene that it did not already have.  Specifically, we need to be able to add all the different fields that are internal to Locker records.  For instance, for a record representing one of your contacts, you should be able to search for first name separately from last name.  Right now our Node.js CLucene module does not support that.

    We also want to add back the ability to update existing records, so your data is always searchable and retrievable regardless of how many times you resync your connectors.  Being able to search by distance from a geo location, such as a Foursquare checkin, is also very high on our list.

    We’re working hard to bring useful features to Lockers across the web, and search is going to be a core part of that.  We are always happy to hear your thoughts or answer any questions you may have.  We are regularly available in the Freenode IRC room #lockerproject, as well as via e-mail and github.  We welcome your feedback!

     
  • temas 3:09 pm on August 9, 2011 Permalink | Reply

    Tech Update #3 

    It’s been a long while since we got an update written, but it’s been a super busy couple of months.

    Dashboard

    We’re continuing to work towards the best dashboard interface possible and have made some more tweaks to the layout. We’re working on the overall first run experience and expect that code to start landing soon.

    Photos

    The photos collection has started to have some real meat thrown on its bones. The Twitpic and Facebook connectors are feeding it all your lovely photos, which you can currently view in the Hello Photos app. It’s still a bit raw, but it’s working, and plenty more exciting visualizations are in the pipeline.

    [Screenshot: the Hello Photos app]

    OSCON


    Jer, Eric, Matt, and I (temas) attended OSCON, and it was a fantastic and productive trip. I’ll let Eric’s writeup give you the details.

     

    Lockerballs

    Part of our OSCON prep was to create a single precompiled tarball that everyone can use to quickly get started with the Locker Project. They have been dubbed Lockerballs and can be found on the website. A giant shoutout to Forrest for the work he put into getting them done and maintained. The Lockerball is a clone of the git repository so once you are ready to start hacking you can git pull and join the fun.

    The plan is to migrate to full virtual machine images in the near future.  We’re working on cutting out as many roadblocks as possible so more people can be testing and working with us.

    Synclets

    On the forefront of new code we have Synclets. These are a break-off from the core connector code to simplify and contain the actual syncing logic. They are a great move toward a clean and ordered system, headed up by Chris and Jer. If you’re writing connector code, check them out and send us feedback to help their early development.

     

     
  • ethomjenn 5:31 pm on August 3, 2011 Permalink | Reply
    Tags: conference, OSCON

    Thoughts Around the Locker Project and OSCON 

    Last week some of the Singly crew headed to Portland, Oregon for O’Reilly’s OSCON.  Our primary goal was to get Lockers into the hands of as many people as possible by handing out USB thumbdrives with instances of the Locker codebase on them.  Jeremie’s talk on the Locker Project was also on our agenda.  Beyond that, a couple of other things got us excited going forward.

    The Concept of Xen micro instances for running Lockers:

    We spoke with Jeremy Fitzhardinge from XenSource/Citrix regarding running Lockers in micro instances of Xen.  Through a lot of discussion, it sounds at least feasible in theory that very small Xen instances could indeed be used.  We also discussed the possibility of running Xen inside of Xen (to run Xen instances on a platform such as AWS), but it appears that this doesn’t perform well, understandably.

    Plug Computers:

    I attended a session that talked about the current state of small “plug” computers–defined as those types of computers that are small enough to simply plug into a wall.  These usually range from very small thumb-drive sizes up to a large wall wart or slightly larger.  The DreamPlug was one being demoed, and could be very interesting as a platform to use as a Locker appliance.  The good news is that besides the various networking capabilities of the DreamPlug, it also has an eSATA port on it, meaning it’s simple to plug in a large, high-performance hard drive for lots of storage.  Storage tends to be one of the weakest aspects of plug computers, and being able to add storage as necessary makes this very interesting.  Its price, when released, is said to be around $99.00 USD.

    Here are the DreamPlug specs:

    • CPU – Marvell Kirkwood 88F6281 @ 1.2GHz speed
    • Linux 2.6.3x Kernel
    • 512MB 16bit DDR2-800 MHz
    • 2MB SPI NOR Flash for uboot
    • 2 GB on board micro-SD for kernel and root file system
    • 2 x Gigabit Ethernet 10/100/1000 Mbps
    • 2 x USB 2.0 ports (Host)
    • 1 x eSATA 2.0 port -3Gbps SATAII
    • 1 x SD socket for user expansion/application
    • WiFi 802.11 b/g
    • Bluetooth 2.1 + EDR
    • Audio Interfaces
    • 5V3A DC power supply

    Another small computer that came across our radar is the Raspberry Pi.  It’s a tiny, ARM-based computer that can run Linux.  It does not have much storage support other than SD cards, but it could be extremely flexible for running one or more connectors and streaming data back to a central Locker instance.  I can see it being used with the Arduino stack to get cheap, realtime sensor data from the environment into Lockers.  Its price is $25.  (Cheaper than the Arduino Uno dev board!)

    Here are the Raspberry Pi specs:

    • 700MHz ARM11
    • 128MB of SDRAM
    • OpenGL ES 2.0
    • 1080p30 H.264 high-profile decode
    • Composite and HDMI video output
    • USB 2.0
    • SD/MMC/SDIO memory card slot
    • General-purpose I/O
    • Open software (Ubuntu, Iceweasel, KOffice, Python)

    Handling Locker Storage Safely and Quickly:

    How to safely scale and store Locker data has been an ongoing discussion within the Locker Project.  Several options are on the table, and a lot depends on the requirements at hand.  For instance, for Singly-hosted Lockers, we will need something that we can host on insecure platform-as-a-service providers such as Amazon AWS.  For this, something like Tahoe LAFS looks like a great contender.

    Other requirements that span both Singly-hosted Lockers and self-hosted Lockers are the capabilities of authorization to access, proof of change of data, and versioning of previously-changed data.

    So it was interesting to meet up with Brad Fitzpatrick from the Memcached/Danga/LiveJournal world and chat about his new project Camlistore.  He gave us a very quick rundown of various features.  One of note is the key signing of any changes made to the filestore, which allows easy confirmation of who changed what and when.  As it was described to me, this same functionality also allows only approved users to view sets and/or subsets of the data.  Lastly, its versioning support could prove to help with our versioning issues as well.

    One use case I found quite compelling was Brad discussing the possibility for a Camlistore (and by extension, a Locker) user to provide a subset of data that other people can write to.  For instance, you could provide a set of photos that you took while attending a party, provide write access to those particular photos to a group of friends who were also at that party, and those other users could add data–such as comments or tags–to your dataset, all using the Camlistore functionality.  This peer-enrichment idea is super exciting to me.

    Locker-wide Search:

    I met up with Tyler Gillies, who wrote the node-lucene module for Node.js.  This module wraps the CLucene library and exposes it to a Node instance.  CLucene, for those who aren’t familiar with it, is a C++ port of the Java Lucene library, itself a very mature and powerful information retrieval library.  Internally, Singly has a Locker-wide search application already running, but it’s a prototype and requires the Elasticsearch Lucene implementation to run.  Elasticsearch is amazingly powerful, but it is also very resource-intensive.  We need something smaller, leaner, and faster, and CLucene fits the bill well.

    On our last night there, Singly put on a Locker Birds of a Feather session where a bunch of us hacked on Locker things together.  I was able to work with Tyler to get up to speed on the state of node-lucene, and to begin contributing back some of the features we’ll need for full Locker search.  I’m super excited to get advanced search capabilities into the Locker soon, such as the following:

    - Find all of my contacts from the contacts collection that have the e-mail address containing the term “singly.com”

    - Find all of my geographic data from any connector or collection for the user “Eric Jennings”

    - Find that link I visited recently that talked about the Google V8 garbage collection method, can’t remember if I read it on my phone or on one of my machines

    If we do search right, these types of search queries will be available to any application, collection, or connector.

    That’s it for the OSCON trip.  Several of us were able to catch up with people we hadn’t seen in years, or had never met outside of IRC.  For those we met, it was great meeting you!  And for those who weren’t able to go, we hope to meet up soon!

     
    • jsgf 9:06 pm on August 4, 2011 Permalink | Reply

      “We also discussed the possibility of running Xen inside of Xen (to run Xen instances on a platform such as AWS), but it appears that this doesn’t perform well, understandably.”

      The problem is more that I don’t think AWS supports the right virtualization modes to even make it possible. But if you want to host on EC2 instances, then you can just use an instance outright, without needing to worry about nesting. Like any of the hosting options, it has its ups and downs, of course.

    • Duane 9:03 pm on March 27, 2012 Permalink | Reply

      The Tonido Plug 2 also looks like a great platform. It has basic file search on top of a shared filesystem built in, plus the ability to add “apps” that work on top of the data.

      http://www.tonidoplug.com/tonido_plug.html

  • nymbot 7:50 pm on August 1, 2011 Permalink | Reply
    Tags: personal data

    Survey: What is your most important personal data? 


    Over at Singly, we were curious what types of data people most cared about, and why. As an experiment, we surveyed 179 people, at a cost of twenty cents per person. Filling out the survey took 2 minutes and 13 seconds on average, meaning the effective hourly rate was about $5.41. I would have expected more responses if I had offered more money, but this was just a small experiment.
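For the curious, the arithmetic behind those numbers:

```javascript
// Effective hourly rate: $0.20 per response, 2 min 13 s average fill time.
const payPerResponse = 0.20;             // dollars
const secondsPerResponse = 2 * 60 + 13;  // 133 seconds
const hourlyRate = payPerResponse * (3600 / secondsPerResponse);
// ≈ 5.41 dollars/hour

// Total cost of the experiment: 179 responses at $0.20 each.
const totalCost = 179 * payPerResponse;  // ≈ $35.80
```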

    The question was “What is your most important personal data?” Users had to pick one of the following:

    • Contacts
    • Messages
    • Events
    • Check-ins
    • Links
    • Photos
    • Music
    • Movies
    • Browser History

    Not surprisingly, Photos won at a whopping 67 votes (37%). Here are the full results:

    [Chart: full survey results]

    What was interesting is that people ranked Browser History, Links, Events, and Check-ins so low. Do people not care about where they went? Is this data considered stale to most people, and therefore irrelevant? I personally believe I can create a lot of value from Browser History and Check-ins. For example, what websites are my friends going to that I’m not? Also, what places should I be going that I’m not? These are just a couple of ideas.

    This survey validates what I believe we already knew: that Photos, Contacts, and Messages were important collections to create. More importantly, though, the second question, “Why is that data the most important to you?”, explains *why* people are so passionate (word cloud from all answers):


    Photos

    “Photos are most important to me because they preserve mine and my family’s history. They are a way to show younger generations deceased family members they may have never gotten an opportunity to meet. The record both happy and sad memories, which are a part of every family’s existence.”

    “Many of my photographs span back from childhood, thus they’re irreplaceable. Every other option on the list is unimportant, as they’re replaceable and don’t hold sentimental value.”

    Contacts

    “My contacts are the most important data because without then i can’t reach my family and my friends. I have all my costumers contacts and if i loose those contacts i can’t work. I make regular backups of my phone and email contacts.”

    “Friends and family are important to me.”

    Messages

    “These days we all do most of our work through electronic communication. So most of my transaction details, confirmations, decisions taken, and documents, come and go through email. So messages are most important.”

    “It’s important to me because I am a human being, and like every human being before me and those to come after me, I have secrets. These can be in the form of affairs, a half life (think LGBT) and other more sinister varieties. Secrets that no one but the people I choose, should know about, this is why this personal data is the most important to me. Data which if exposed, parts of my well built artificial life to satisfy a conservative society and family would come crashing down around me. I am presuming you also mean emails when you say messages. This is why this data is the most important for me. I hope I have been able to help you, thanks.”

    If you would like to read on, the raw data is available at: https://spreadsheets.google.com/a/singly.com/spreadsheet/ccc?key=0Ak8IPGG6Z4dOdC1ROHQwRkhUMEtPQTNkNXFoMnQzcEE&hl=en_US#gid=0

     
    • Fritz Müller 2:32 pm on August 3, 2011 Permalink | Reply

      You forgot to ask for personal video files. Lots of people use video cameras and film important events.
