Drybreeze
So this software is great.  I don't think there are many people who would object to that in any major way.

But as seems to be recognised and discussed fairly often from what I've read, one of the biggest letdowns is the initial entry of data.  Brain can only recall what has been entered, and entering is slow, time-consuming, and necessarily selective, especially if you want to tag or type things with any degree of detail.  And, ironically, the more detail you enter, the more effective recall is later... so it's a trade-off, and the chief constraint at the moment is the initial data entry.

I have played with the recently released Android version of Brain, and sadly it is a bit of a letdown at the moment, for a range of reasons.  However, it is on the right track... moving this program to a mobile platform is definitely where the future lies.

My feature suggestion is a bit more than a feature... it's more of a core design direction.  My suggestion to the Brain publishers: take advantage of your position in the market and do what someone else eventually will if you don't.  Something that will make somebody a LOT of money.

I suggest you create an app that integrates thoroughly and intuitively with mobile devices, making it a snap for users to record events such as meetings, locations, pictures and sounds, automatically time- and date-stamping everything and even classifying it using GPS location.  Using currently available AI, the app could also be smart enough to know what the user is doing or about to do.  For example, if they're using maps to navigate, the app would know that the intended destination matters to the user and is likely a new location, and could record events that happen at that location, to be kept or discarded later via a "suggested brain entries" list or similar.

The aim here is simple - create an app that, automatically or with minimal user input, records sound, images and other relevant information and compiles it into the brain's plex.  The app can then easily recall events with perfect clarity, even years later if necessary.  That would truly be artificial memory with perfect, inarguable clarity, and THAT is where the money is.

Criminal encounters can be submitted to police, giving the user a great degree of security.
Agreements can be recalled and presented years after the fact, putting an end to verbal agreements being worth next to nothing and making shams and cons much less likely, again giving the user a degree of safety.
Encounters with new business colleagues or managers can be recorded and recalled later, so that instructions, suggestions and new ideas are captured and can be recalled perfectly.  No detail missed.
...You see where I'm going with this.
Much like a dashboard camera is used now, except that it's mounted on the user, and the information being recorded is linked to other items in the plex such as businesses, people, tasks, events, etc. in the more traditional Brain plex format.

Example:
User wants to visit a new shop they've heard about.  They enter the name of the shop, "Abbey's Boutique, Melbourne", into their navigation app, which duly locates and displays the location.
...Meanwhile the Brain app perks up its ears and knows the user is interested in this particular shop.  First, it catalogues in a journal that the user was interested in the shop, and (according to rules set up by the user) creates a child thought under "Locations" with the location's name and some basic information it can glean from the net, for example that it's a shop, plus its address, phone number, or similar.  This information is readily scraped from Google; no new technology necessary.
So the user meanwhile tells the nav app to navigate to the location, which it does.
Now the Brain app knows the user is en route to the destination, and again makes a journal note of this.  At this point it might prompt the user to ask whether they'd like to make a sound recording or video log of the location when they arrive.  Perhaps the user would like a sound recording of the encounter with the shop owner without being distracted by prompts while actually there.  The user selects a sound recording to be made automatically upon arrival.  Brain acknowledges and waits, tracking the user's location until they arrive.
The user arrives at the car park near the location, walks the final distance to Abbey's Boutique, and the Brain app silently begins recording sound at the same moment the nav app announces their arrival. Still no new technology required.
The user browses through the shop, and finds an item that would make an excellent gift for their partner.  User photographs the item and price, and leaves the shop.  Brain stops the sound recording, and files the pictures away as children of the Abbey's Boutique thought created under the Locations thought card.
Later that evening, when it suits the user, they browse through the temporary thoughts Brain created that day and confirm which to keep (or re-link).  The user decides the sound recording was unnecessary after all (unlike the one taken three days ago that caught a shop owner promising a full refund if they weren't happy with their purchase), so Brain forgets the sound recording, permanently keeps the images and links them to that location.  The user also adds a few other snippets of info to the Abbey's Boutique thought, such as "parking is easy", and notes next to the pictures that they'd make a great gift for their partner, linking them to the partner's thought.
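To be clear, the logic behind that example isn't complicated.  As a very rough sketch in Python (every function and name here is invented purely for illustration - none of this exists in Brain today), the arrival trigger and the end-of-day review might look like this:

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class TempThought:
    """A temporary thought captured automatically, to be confirmed or
    forgotten in the end-of-day review."""
    name: str
    parent: str                              # e.g. "Locations"
    attachments: list = field(default_factory=list)

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres between two GPS points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def watch_arrival(destination, get_position, start_recording, radius_m=50):
    """Poll the phone's position and, once the user is within radius_m of the
    destination, silently start the recording they approved earlier."""
    while True:
        lat, lon = get_position()
        if distance_m(lat, lon, destination["lat"], destination["lon"]) <= radius_m:
            start_recording()
            return TempThought(name=destination["name"], parent="Locations")
        time.sleep(10)

def end_of_day_review(temp_thoughts, keep):
    """Keep only the temp thoughts the user confirms; the rest are forgotten,
    like the unneeded sound recording in the example above."""
    return [t for t in temp_thoughts if keep(t)]
```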


So far this is a small leap from what Brain currently does, except that it includes a few GPS-inspired prompts and some active listening and smarts based on user activity - a bit like Siri does now.

However, where the big coin comes in is in a few years' time, when mobile devices become far more integrated into normal life, as Google Glass broke the ice on a few years ago.  Once video can be captured by a device the user doesn't need to hold up, visual recordings can be made and later recalled with absolute clarity by being located within the plex according to user-set rules such as tags, types, or child and sibling thoughts... however the user wants to organise these things, as per the current design of the plex software.
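To spell out what I mean by recall via user-set rules: once a recording is attached to a thought, finding it again is just a matter of walking the plex with the user's rule.  A rough sketch (the Thought structure here is a stand-in I made up, not TheBrain's actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    name: str
    thought_type: str = ""                           # e.g. "Location", "Person", "Event"
    tags: set = field(default_factory=set)
    children: list = field(default_factory=list)
    attachments: list = field(default_factory=list)  # paths to audio/video clips

def find_recordings(root, rule):
    """Walk the plex from root and return the attachments of every thought
    that matches the user-set rule (a predicate over tags, types, links)."""
    found, stack = [], [root]
    while stack:
        t = stack.pop()
        if rule(t):
            found.extend(t.attachments)
        stack.extend(t.children)
    return found

# e.g. "everything recorded at locations I tagged #meeting":
# find_recordings(plex_root, lambda t: t.thought_type == "Location" and "meeting" in t.tags)
```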

Creating this software pre-emptively now means that when worn video recording devices become more normal (or even now, with police for example), this software will be in a position to dominate the market.

Also, even today the software could record things like location names and automatically present suggestions such as the catalogue of the shop being visited, or integrate with friends' Brain plexes to note that they too visit a particular bar quite often, how often you've met up there, on what dates, and what was discussed with them there three months ago.
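That friends'-plex part is really just an intersection of two visit journals, assuming both people have opted in to sharing them.  Roughly (all place names and dates invented for illustration):

```python
from datetime import date

def shared_visits(mine, theirs, place):
    """Dates on which both journals record a visit to the same place."""
    my_dates = {d for p, d in mine if p == place}
    their_dates = {d for p, d in theirs if p == place}
    return sorted(my_dates & their_dates)

# Hypothetical auto-logged (place, date) journals for me and a friend:
my_journal = [("The Corner Bar", date(2015, 3, 2)), ("The Corner Bar", date(2015, 5, 14)),
              ("Abbey's Boutique", date(2015, 6, 1))]
friend_journal = [("The Corner Bar", date(2015, 3, 2)), ("The Corner Bar", date(2015, 4, 9))]

print(shared_visits(my_journal, friend_journal, "The Corner Bar"))
# -> [datetime.date(2015, 3, 2)]
```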

When this technology becomes available (and it will) it will give users the edge by providing them with an impressive and essentially perfect memory.  One that can be shared on demand. People with this augmentation will have a distinct advantage.

The world will never be the same.
Everything is temporary.
mcaton
Drybreeze,

I enjoyed reading through your feature suggestion.  An automated activity tracker is an interesting leap from where TheBrain is now, but I certainly see the connection.  I'm going to write this up for our engineers to consider in future builds of TheBrain.  We are currently working on some "inbox" functionality that relates to your "end of day, temp. thought review."  You can look for this in the very near future.

Glad to hear that you are checking out the mobile version.  There are certainly more improvements for our mobile apps in the pipeline as well.  Stay tuned!

Thank you,
Matt
Drybreeze
I'm glad you found something interesting in it.
Because every user is different, I think it would be important for people to be able to set up their own rules for how this activity tracker functions... so that a user who might not want to record every detail, or images, or anything at certain times (a work-related brain only, for example, with no recording of their personal life) can opt out of features or details that might interest a different user.
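To sketch what I have in mind, the rules could be as simple as a per-brain settings object like this (every field name here is invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class TrackerRules:
    """Per-brain opt-in settings for the hypothetical activity tracker."""
    record_audio: bool = False
    record_images: bool = True
    record_locations: bool = True
    active_hours: tuple = (0, 24)         # only track between these hours
    review_before_saving: bool = True     # everything lands in the temp-thought review first

# A work-only brain: no audio, nothing outside office hours.
work_rules = TrackerRules(record_audio=False, active_hours=(9, 17))

# A personal brain that records more, around the clock.
personal_rules = TrackerRules(record_audio=True, record_images=True)
```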

The inbox functionality sounds interesting, I look forward to seeing how it works.

The mobile version needs to be able to change views - I work almost exclusively in Outline view, while the mobile version seems to cater only for Normal view.
Everything is temporary.
pthompson
Drybreeze,

I'll make sure that the request for different view options in our mobile applications is documented.
PaddyVA
Oooh, Drybreeze, that sounds so great!  "I suggest you create an app that integrates thoroughly and intuitively into mobile devices to make it a snap for users to record events such as meetings, locations, pictures and sounds and automatically time-date stamp things . . ."

Zenrain has done a bunch of work using Workflow app, but it's still a pain to have to go to the Inbox and drag things out and arrange them.

Evernote still remains my main repository; I wish getting things into TB from various places were as easy.

Pat
