Monday, October 3, 2011

Adobe Unveils Six New Apps for the iPad, Including Photoshop Touch

At its MAX 2011 technology conference, Adobe today announced Adobe Touch Apps, a family of six intuitive touchscreen applications designed for Android tablets and the Apple iPad that enable anyone to explore ideas and present their creativity anytime, anywhere.

Inspired by Adobe Creative Suite software, these stunning new apps bring professional-level creativity to millions of tablet users. The apps address multiple areas of the creative process: image editing; ideation; sketching; mood boards; website and mobile app prototyping; and presenting finished work. They are headlined by Adobe Photoshop Touch, a groundbreaking app that brings the legendary creative and image-editing power of Photoshop to tablet devices for the first time.

Available soon as standalone apps, Adobe Touch Apps are essential components of Adobe Creative Cloud, a major new company initiative also announced today (see separate release). Adobe Creative Cloud will become a worldwide hub for creativity, where millions can access desktop and tablet applications, find essential creative services, and share their best work. Files created via Adobe Touch Apps can be shared, viewed across devices or transferred into Adobe Creative Suite software for further refinement – key features of Adobe's vision for the Creative Cloud. With stylus capabilities expected to become a key feature on some next generation tablets, Adobe Touch Apps are designed to work with both finger and stylus input.

The new Adobe Touch Apps include:

● Adobe Photoshop Touch lets users transform images with core Photoshop features in an app custom-built for tablets. With simple finger gestures, users can combine multiple photos into layered images, make popular edits and apply professional effects. The tablet-exclusive Scribble Selection Tool allows users to easily extract objects in an image by simply scribbling on what to keep and then what to remove. With Refine Edge technology from Adobe Photoshop, even hard-to-select areas with soft edges, such as hair, are easily captured when making selections. Additionally, the app helps users quickly find images, share creations, and view comments through integration with Facebook and Google Search. Using the syncing capabilities that are a component of Adobe Creative Cloud, files can be opened in Adobe Photoshop.

● Adobe Collage helps creatives capture and refine ideas and concepts by allowing them to combine inspirational images, drawings, text and Creative Suite files into modern, conceptual mood boards. Features include importing of images, four customizable pen types for drawing, adding text, and applying color themes. A virtually unconstrained canvas grows as needed to accommodate more assets. Files can be shared or transferred for access in Adobe Photoshop.

● Adobe Debut allows users to present designs to clients and stakeholders virtually anywhere. Adobe Debut quickly opens tablet-compatible versions of Creative Suite files for convenient and beautiful viewing on the tablet, including Photoshop layers and Illustrator art boards. Feedback is gathered using a markup pen tool to add notes and drawings on top of the work.

● Adobe Ideas is an easy-to-master, vector-based tool for drawing. Whether drawn with a stylus or a finger, strokes appear smooth at any zoom level. Starting with a blank canvas, users can choose color themes and pull in tablet-compatible image files that can be controlled as separate layers. Finished results are easily accessed in Adobe Illustrator or Photoshop.

● Adobe Kuler makes it easy to generate color themes that can inspire any design project. Color can be explored and discovered, with hundreds of thousands of Kuler themes already available via the creative community. Social engagement in the community is enhanced by rating and commenting on themes, which can be exported as color swatches for Adobe Creative Suite projects.

● Adobe Proto enables the development of interactive wireframes and prototypes for websites and mobile apps on a tablet. Ideas are communicated and shared with teams and clients using a touch-based interface. Gestures quickly express a design concept, explain website structure or demonstrate interactivity. The wireframe or prototype then can be exported as industry standard HTML, CSS and JavaScript, and shared in popular browsers for immediate review and approval.

Adobe Touch Apps build on the launch of Adobe Carousel, the only photography solution that gives access to your entire photo library across your tablets, smartphones and desktops - no storage issues, no manual syncing hassles. Enjoy all your photos anywhere you are, and make them look terrific using the same powerful photo-processing technology as Adobe Photoshop Lightroom software.

Pricing and Availability
Adobe Touch Apps will be available for Android devices in November 2011. Adobe expects to make an announcement regarding iOS availability in early 2012. Adobe Ideas is already available for the iPad. Introductory pricing is US$9.99 for each app. Access to the file viewing, sharing and transfer functionality of Adobe Creative Cloud is included in the price of each Adobe Touch App. Details regarding pricing of the Adobe Creative Cloud and its expanded capabilities around applications, services and community will be announced in November 2011.

*thanks iclarified*

Send us a story or tip @ and follow our pages for the latest limera1n, rubyra1n, and all tech stories, follow us on Twitter at @iphonepixelpost or @limerain_com
And like our Facebook page
- Posted using my iPhone 4

iCloud wizard shows up in 10.7.2 Server

Reader Drew sent along these images of an iCloud setup wizard that came up after the latest 10.7.2 installation on Mac OS Server. We haven’t seen these before so we are posting:

It is notable that these appeared in Server rather than the client setup. This is likely what users will see when they upgrade to 10.7.2, possibly as early as tomorrow. Two more wizard shots below:

*thanks 9to5mac*


Vodafone Germany mentions 16/32/64GB iPhone 4S, 8GB iPhone 4

Following a questionable list of iPhone 5 specs seen on the website of carrier Cincinnati Bell and iPhone 5 references that surfaced in Radio Shack’s inventory system, the German branch of British multinational carrier Vodafone – which in the United States has a 45% ownership stake in Verizon Wireless – references the as-yet-unreleased 8GB iPhone 4 model in addition to the rumored iPhone 4S.

Specifically, as first reported by, the 8GB iPhone 4 model is being referenced on the carrier’s online store, available in black and white. Last month, Reuters reported that suppliers are building an 8GB iPhone 4 model, which the news organization claimed is the inexpensive iPhone the rumor mill has been hyping.

In addition, the site makes mention of the black and white iPhone 4S, each in 16/32/64GB flavors, confirming the findings of 9to5Mac’s Mark Gurman. As for Vodafone Germany, remember it is Apple’s high-profile partner that carries the iPhone 4 in Germany alongside Deutsche Telekom-owned T-Mobile. To get a clearer picture of the next iPhone, check out our exhaustive overview of late rumors and what we’re expecting from tomorrow’s presser.

*thanks 9to5mac*


Interview with Siri Co-Founder: Assistant Launch Is a “World-Changing Event”

On Tuesday, Apple will change the way humans interact with electronic devices. All over again.

Perhaps the biggest announcement at Apple’s iPhone event on Tuesday will be Assistant, Apple’s evolution of the Siri Personal Assistant Software. Siri, you’ll remember, is the company Apple picked up for a rumored $200 million in April of last year for, in Steve Jobs’ words, its “Artificial Intelligence”, not search or speech recognition.

During Siri’s brief two months on its own, it described itself as a ‘VPA’:

Virtual Personal Assistants (VPAs) represent the next generation interaction paradigm for the Internet. In today’s paradigm, we follow links on search results. With a VPA, we interact by having a conversation. We tell the assistant what we want to do, and it applies multiple services and information sources to help accomplish our task. Like a real assistant, a VPA is personal; it uses information about an individual’s preferences and interaction history to help solve specific tasks, and it gets better with experience.
Apple has long wanted to bring an Artificial Intelligence-based personal assistant to the masses. In the late ’80s, Apple made the Knowledge Navigator series of videos (example below) to showcase this ambition.

In the video, the professor mentions that someone wrote an article five years ago trashing Jill’s research (watch from 1:25 onward; at 1:50 he gives more details). The computer names the doctor and dates his article to 2006, which places the professor in 2011. Ha! Thanks PBHK!
The world has come a long way since then, but as you’ll see on Tuesday, Apple had remarkable foresight way back in 1987.

We had the chance to speak to Siri’s co-founder and board member, Norman Winarsky…

First, Some Background:
Device input methods have evolved from the keyboard, to the mouse and, more recently, to the touch interface. All of these methods, while not invented by Apple, were “mainstreamed” by Steve Jobs’ company over the last thirty-five years.

But humans didn’t evolve to communicate with keyboards or mice or even a touch screen. We’ve contorted our bodies to deal with our computer tools (Hi RSI!) but really we’re hard-wired for talking and listening – functions we, as a species, have been doing for tens of thousands of years.

Unfortunately, we haven’t yet invented a computer that can understand what we say, and more importantly, use that information to go find answers and relay that information back to us. That would require not only recognition of the language but the Artificial Intelligence to understand it, use it, and return something of value.

Well, that’s not entirely true. In 2003, the US Government began the most ambitious Artificial Intelligence program in its history called the “Cognitive Assistant that Learns and Organizes” or CALO program. The name was inspired by the Latin word “calonis”, which means “soldier’s servant”. Funded by DARPA as part of its Personal Assistant that Learns project, the program ran for five years and brought together more than 300 researchers from 25 of the top university and commercial research institutions, with the goal of “building a new generation of cognitive assistants that can reason, learn from experience, be told what to do, explain what they are doing, reflect on their experience, and respond robustly to surprise.”

The program was coordinated through SRI International in Menlo Park, CA. As the program ended in 2007, SRI took the knowledge gained from CALO, along with some of its key players, and formed Siri. SRI’s Norman Winarsky, the man uniquely positioned at the crossroads of the CALO project and the company spun out of it, talked to us about the implications of Apple mainstreaming ‘Assistant’.

9to5Mac: What was your role in putting together Siri?

Norm: As CALO was coming to an end, we realized that there were incredible commercial opportunities to build a smart personal assistant from what we had learned over the five years of the CALO project. My job was getting funding (the VCs were Morgenthaler and Menlo Ventures) and assembling the team, headed by Dag Kittlaus, a former Motorola executive. With him came Semantic Web genius Tom Gruber and CALO chief architect Adam Cheyer. At the time of the Apple purchase, the team was at 19 and growing. All three co-founders still work at Apple, along with much of the rest of the original team. I obviously stayed at SRI after the purchase.

9to5Mac: Can you tell us a little bit about getting picked up by Apple? What was the process? How did they evaluate the company? Are any of the financials available?

Norm: I am bound by non-disclosure on all of the information from the sale that is not public including the [rumored $200 million] sale price. What is notable is that Apple closed its purchase of Siri just two months after we went public with our app. You can probably draw your own conclusions from that.

9to5Mac: How important is Nuance speech recognition to the Siri technology?

Norm: It is a lot less important than you’d probably think. When we first built Siri, we used Vlingo for speech recognition, so the speech recognition component was modular at the time of purchase. Theoretically, if a better speech recognition engine comes along (or Apple buys one), they could likely replace Nuance without too much trouble. That being said, Nuance has far and away the most IP in speech synthesis technologies in the industry. We should know: SRI launched Nuance as one of our incubated companies in 1995, and it IPO’d in 2000.

9to5Mac: What kind of power does the Siri AI take? Could it have caused the delay of the next iPhone?

Norm: I’m not familiar with Apple’s roadmap and any delays but I can say that AI takes a lot of computing power. The Siri software needs to cache data, needs to access a big dataset at wide bandwidth and needs a big processor to crunch all of the numbers. When we originally released Siri for the iPhone 3GS, we had to perform all kinds of optimizations and shortcuts to get it to work efficiently. All I can say is that it will likely run much better on a faster phone.

9to5Mac: Is this Siri ‘Assistant’ a big deal?

Norm: Let me first say I have no knowledge of what Apple plans to do with the Siri purchase. I read the rumors just like everyone else, and it appears that Apple is getting ready to reveal what it has done with Siri over the past year and a half (we were actually expecting it at WWDC). Make no mistake: Apple’s ‘mainstreaming’ of Artificial Intelligence in the form of a Virtual Personal Assistant is a groundbreaking event. I’d go so far as to say it is a world-changing event. Right now a few people dabble in partially AI-enabled apps like Google Voice Actions, Vlingo or Nuance Go. Siri was many iterations ahead of these technologies, or at least it was two years ago. This is REAL AI with REAL market use. If the rumors are true, Apple will enable millions upon millions of people to interact with machines using natural language. The assistant will get things done, and this is only the tip of the iceberg. We’re talking another technology revolution. A new computing paradigm shift.

It reminds me of another SRI project: Doug Engelbart, inventor of the mouse, augmented human ability back in the ’60s. Just as Steve Jobs took that technology and ran with it, we believe that Apple will use Siri to start another revolution.

9to5Mac: Thanks for your time, Norm. This reminds us of Steve Jobs’ “computer is a bicycle for the mind” quote:

Some other interesting data on the founders:

The three founders are all still at Apple, though they work on other projects. We found Dag Kittlaus’ comments that he’s now making “the next big thing into a really big thing” interesting (below).


Adam Cheyer demonstrated (PDF) the CALO Express application in 2007, just before starting Siri. The application ran on Windows CE because it was aimed at government use.

Perhaps most interesting was Tom Gruber, speaking at Semantic Web in 2008 just before Siri went public:

Come back Tuesday at 10am Pacific for all of the announcements.

*thanks 9to5mac*
