Category Archives: Software

A Man…

…was sitting in a wheelchair on a street. Hunched over, eyes closed, right hand slumped over a can of beer. Right foot: old bandages. Left foot: an old shoe.

“A homeless man,” I thought, my belly full of the burrito I had just eaten (and a little bit of beer). “Maybe I should give him some money? Nah, he’ll just spend it on bee… wait… he’ll spend it on the same things I just spent it on. Why am I judging?” I put on my helmet, my gloves, turned toward my motorcycle at the curb, argued with myself for another minute, then turned around and walked up to the man. “In a week, I’ll be at Burning Man where there are no transactions, only gifts. I should start practicing today,” I thought.

He woke up from my presence, but wasn’t looking at me at first.

“Here, have a nice night, sir,” I said, putting a $20 bill into his hand. He didn’t grab it, and it almost fell out.
“What? You said what? What’s this?” he asked. And then he looked up at me. Black eyes, blacker than his skin. Ahhh, he’s blind.
“Just some money, you can take it.” I answered.
“Oh. OH! Thank.. I didn’t… thank you. But.. wait.. how much, what is it?”
“That’s 20 bucks.”
“What? Twe… twenty? Twenty dollars? Oh my. Oh my. Is it truly, in my hand, right now?”
“It is, sir.”
“This here? This here is that much?” I felt my heart retreat.
“Yes. You should hide it in a pocket.”

“Oh yes, I should, I should. Thank you, thank you,” the man mumbled, with an expression on his face that I may never forget – an expression of the most pleasant surprise, perhaps hope of some kind. Hope, like an old friend who does not visit often. He instinctively tried to put the bill in his front shirt pocket, perhaps still half-asleep, for he was wearing a sweater that didn’t have a front pocket.

“Oh, there’s no pocket there, sir. Try a jeans pocket. Have a nice night,” I said, as I patted him on the shoulder and walked away. “Oh, oh, and you.”

I had given before, but that… felt different. That felt substantial. Perhaps because I gave without being asked. Perhaps because I expected a druggie, but found a broken, truly disabled man.

I left him: a most unexpected surprise painted on his face.

He left me: a fresh pair of water droplets stuck in my eye sockets.

My 26-yo Friend is Dying, You CAN Help. Please help.

Suzy was an amazing human being and a healthy, fit MIT grad when she was diagnosed with two cancers at age 26. She has had 6 chemos and is holding up well, but she’s now bankrupt and cannot afford the 10% down payment for her 7th. The down payment is $4,300. This means she’ll be saddled with the other 90% as debt, BUT she may live to actually have a chance at paying it off.

PLEASE donate what you can here: http://www.gofundme.com/zsuzsastrong.

I’ve donated hundreds of dollars over the last few months. I recently saved up to buy the camera of my dreams, a Leica; I will be returning it this week so that I can donate more money. I won’t let my friend die. 

Suzy is a scientist, a lovely human being, and a highly inspiring friend who helped me set high goals back when I was still in Ukraine. I _know_ she will have a great, positive impact on this planet and on humanity in general if she is allowed to live. Let’s please save her. 

Thank you, immeasurably.

How to mass-rename files/change prefix in Xcode

Let’s say you created a BOOMWackadoo project. You’ve been working on it for a while, and now have a ton of files starting with “BOOMWackadoo,” as well as a ton of BOOMWackadoo references in code. One day a friend says, “Why don’t you just remove BOOM? Wackadoo seems cool enough.” If you realize your friend is right, here’s how you implement the advice:

  1. If you specified your project’s prefix as “BOOM” when you created the project, change it to an empty string as shown here. (The UI is the same in Xcode 5 as it was in Xcode 4.) Now close your project in Xcode.
  2. Time to mass-rename all your files on disk. Download the Automator template linked at the end of this article and run it. Make sure that in the “Rename Finder Items: Replace Text” workflow you select “full name” from the drop-down to the right of the “Find” text field. FYI, I had to run this Automator template twice to get all the files renamed, for some reason. (If you’d rather script this step, see the sketch at the end of this post.)
  3. Now that you’ve renamed all the files, it’s time to change all the references in your .xcodeproj file! Open your Wackadoo.xcodeproj file in some sort of real editor, like Sublime Text 2. (Notice that the step above renamed the file from BOOMWackadoo.xcodeproj to Wackadoo.xcodeproj.) In the sidebar, right-click on Wackadoo.xcodeproj and select ‘Find in Folder…’ Type BOOMWackadoo in the Find field and Wackadoo in the Replace field. Push the “Case Sensitive” button on the left (unless your file name casing isn’t consistent, in which case shame on you). Hit Replace, confirm, and make sure to save the changes. Now you can close Sublime.
  4. Finally, we need to change all the code references. Open your Wackadoo project in Xcode and don’t try to build lest you just like the color red. Press Command + 3 to open the Find Navigator (or click the magnifying glass icon at the top of the Navigator sidebar). Tap the “Find” text above the first text box and select Replace > Text > Starting with, then type “BOOMWackadoo” (the old name) in the search box and hit Enter. This will find all occurrences and enable the Replace All button. Type “Wackadoo” in the “With” text box, then hit Replace All (if it asks whether you want to enable snapshots and create one, what you pick is up to you). (Screenshot of Find/Replace in Xcode.)

Das is it! You should be able to build and run your “BOOM”-less project now.
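By the way, if you’d rather script step 2 than use Automator, here’s a rough, non-recursive Foundation command-line sketch of the same idea. The tool name and arguments are mine, not part of any official workflow, so treat it as a starting point:

#import <Foundation/Foundation.h>

// Usage: strip-prefix <directory> <prefix>
// Renames every file in <directory> that starts with <prefix> by removing the prefix.
int main(int argc, const char *argv[])
{
	@autoreleasepool {
		if (argc < 3) {
			printf("usage: strip-prefix <directory> <prefix>\n");
			return 1;
		}

		NSString *directory = [NSString stringWithUTF8String:argv[1]];
		NSString *prefix = [NSString stringWithUTF8String:argv[2]];
		NSFileManager *fileManager = [NSFileManager defaultManager];
		NSError *error = nil;

		for (NSString *name in [fileManager contentsOfDirectoryAtPath:directory error:&error]) {
			if (![name hasPrefix:prefix]) continue;

			NSString *oldPath = [directory stringByAppendingPathComponent:name];
			NSString *newPath = [directory stringByAppendingPathComponent:[name substringFromIndex:prefix.length]];

			if (![fileManager moveItemAtPath:oldPath toPath:newPath error:&error]) {
				NSLog(@"Failed to rename %@: %@", name, error);
			}
		}
	}
	return 0;
}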

Testing a Camera App in iOS Simulator

As of today, the iOS Simulator doesn’t use your dev machine’s camera to simulate the device camera – all you get is a black screen. I took the slow route of testing my first iOS app, Progress, on actual devices instead of in the simulator for a long time before I realized there’s a simple workaround for this.

I also wanted to show how to start the camera asynchronously, in case you weren’t already doing that, and how to use AVFoundation for the camera (I was scared of it myself at first, but trust me, I don’t regret it).

Let’s say we have a CameraViewController – how you get to it is up to you. I wrote my own container view controller that pushes/pops the camera VC and acts as its delegate. You may use a navigation controller, or display it modally, etc.

CameraViewController.h:

#import <UIKit/UIKit.h>

@protocol CameraDelegate <NSObject>

- (void)cameraStartedRunning;
- (void)didTakePhoto:(UIImage *)photo;

@end

@interface CameraViewController : UIViewController

@property (nonatomic, weak) id<CameraDelegate> delegate;

- (void)startCamera;
- (void)stopCamera;
- (void)takePhoto;
- (BOOL)isCameraRunning;

@end

Your container or calling VC will implement the CameraDelegate protocol, spin up an instance of CameraViewController, register itself as the delegate on it, and call startCamera. Once the camera is ready to be shown, cameraStartedRunning will be called on the delegate controller on the main thread. 
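For concreteness, here’s a minimal sketch of that calling side. The ContainerViewController class name and the presentCamera method are just illustrative placeholders for whatever controller you use:

#import <UIKit/UIKit.h>
#import "CameraViewController.h"

@interface ContainerViewController : UIViewController <CameraDelegate>
@property (nonatomic) CameraViewController *cameraVC;
@end

@implementation ContainerViewController

- (void)presentCamera
{
	self.cameraVC = [[CameraViewController alloc] init];
	self.cameraVC.delegate = self;
	[self.cameraVC startCamera];
	// ...add cameraVC to the view hierarchy however you prefer (child VC, modal, nav push)...
}

- (void)cameraStartedRunning
{
	// Called on the main thread once the capture session is live; reveal the camera view here.
}

- (void)didTakePhoto:(UIImage *)photo
{
	// Hand the photo off to the next screen.
}

@end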

Starting the Camera

CameraViewController.m, part 1:

#import "CameraViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface CameraViewController ()

@property (nonatomic) AVCaptureSession *captureSession;
@property (nonatomic) UIView *cameraPreviewFeedView;
@property (nonatomic) AVCaptureStillImageOutput *stillImageOutput;
@property (nonatomic) AVCaptureVideoPreviewLayer *captureVideoPreviewLayer;

@property (nonatomic) UILabel *noCameraInSimulatorMessage;

@end

@implementation CameraViewController {
	BOOL _simulatorIsCameraRunning;
}

- (void)viewDidLoad
{
    [super viewDidLoad];

	self.noCameraInSimulatorMessage.hidden = !TARGET_IPHONE_SIMULATOR;
}

- (UILabel *)noCameraInSimulatorMessage
{
	if (!_noCameraInSimulatorMessage) {
		CGFloat labelWidth = self.view.bounds.size.width * 0.75f;
		CGFloat labelHeight = 60;
		_noCameraInSimulatorMessage = [[UILabel alloc] initWithFrame:CGRectMake(self.view.center.x - labelWidth/2.0f, self.view.bounds.size.height - 75 - labelHeight, labelWidth, labelHeight)];
		_noCameraInSimulatorMessage.numberOfLines = 0; // wrap
		_noCameraInSimulatorMessage.text = @"Sorry, no camera in the simulator... Crying allowed.";
		_noCameraInSimulatorMessage.backgroundColor = [UIColor clearColor];
		_noCameraInSimulatorMessage.hidden = YES;
		_noCameraInSimulatorMessage.textColor = [UIColor whiteColor];
		_noCameraInSimulatorMessage.shadowOffset = CGSizeMake(1, 1);
		_noCameraInSimulatorMessage.textAlignment = NSTextAlignmentCenter;
		[self.view addSubview:_noCameraInSimulatorMessage];
	}

	return _noCameraInSimulatorMessage;
}

- (void)startCamera
{
	if (TARGET_IPHONE_SIMULATOR) {
		_simulatorIsCameraRunning = YES;
		[NSThread executeOnMainThread: ^{
			[self.delegate cameraStartedRunning];
		}];
		return;
	}

	if (!self.cameraPreviewFeedView) {
		self.cameraPreviewFeedView = [[UIView alloc] initWithFrame:self.view.bounds];
		self.cameraPreviewFeedView.center = self.view.center;
		self.cameraPreviewFeedView.backgroundColor = [UIColor clearColor];

		if (![self.view.subviews containsObject:self.cameraPreviewFeedView]) {
			[self.view addSubview:self.cameraPreviewFeedView];
		}
	}

	if (![self isCameraRunning]) {
		dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
			AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

			if (!self.captureSession) {

				self.captureSession = [AVCaptureSession new];
				self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto;

				NSError *error = nil;
				AVCaptureDeviceInput *newVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
				if (!newVideoInput) {
					// Handle the error appropriately.
					NSLog(@"ERROR: trying to open camera: %@", error);
				}

				AVCaptureStillImageOutput *newStillImageOutput = [AVCaptureStillImageOutput new];
				NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
				[newStillImageOutput setOutputSettings:outputSettings];

				if ([self.captureSession canAddInput:newVideoInput]) {
					[self.captureSession addInput:newVideoInput];
				}

				if ([self.captureSession canAddOutput:newStillImageOutput]) {
					[self.captureSession addOutput:newStillImageOutput];
					self.stillImageOutput = newStillImageOutput;
				}

				NSNotificationCenter *notificationCenter =
				[NSNotificationCenter defaultCenter];

				[notificationCenter addObserver: self
									   selector: @selector(onVideoError:)
										   name: AVCaptureSessionRuntimeErrorNotification
										 object: self.captureSession];

				if (!self.captureVideoPreviewLayer) {
					[NSThread executeOnMainThread: ^{
						self.captureVideoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
						self.captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
						self.captureVideoPreviewLayer.frame = self.cameraPreviewFeedView.bounds;
						[self.cameraPreviewFeedView.layer insertSublayer:self.captureVideoPreviewLayer atIndex:0];
					}];
				}
			}

			// this will block the thread until camera is started up
			[self.captureSession startRunning];

			[NSThread executeOnMainThread: ^{
				[self.delegate cameraStartedRunning];
			}];
		});
	} else {
		[NSThread executeOnMainThread: ^{
			[self.delegate cameraStartedRunning];
		}];
	}
}

...

At the top, we have some properties:

AVCaptureSession is the main object that handles AVFoundation-based video/photo stuff. Whether you’re capturing video or photo, you use it.

AVCaptureStillImageOutput is an object you use to take a still photo of AVCaptureSession’s video feed.

AVCaptureVideoPreviewLayer is a subclass of CALayer that displays what the camera sees. Because it’s a CALayer, you can insert it into any UIView, including the controller’s own view. However, to separate concerns and for easier management, I don’t recommend using the VC’s view as the parent view for that layer. Here, you can see that I use a separate cameraPreviewFeedView as the container for AVCaptureVideoPreviewLayer.

There’s also a private _simulatorIsCameraRunning boolean. It’s here because when we are running the app in the simulator, we can’t use AVCaptureSession, and hence we can’t ask AVCaptureSession if the camera is running. So we need to track whether the camera is on or off manually.

The noCameraInSimulatorMessage property is self-explanatory and entirely optional – it just lazy-instantiates a label and adds it to the main view. viewDidLoad simply shows/hides this label as appropriate. We check the TARGET_IPHONE_SIMULATOR macro to know when we’re in the simulator; it’s defined in Apple’s TargetConditionals.h, which the iOS frameworks pull in for you.

With the variables in place, the logic is simple. When startCamera is called, we check if we’re running in a simulator (TARGET_IPHONE_SIMULATOR will be YES) and, if so, set _simulatorIsCameraRunning to YES. We then tell the delegate that the camera started and exit out of the method. If we’re not in the simulator, however, then we start up an AVFoundation-based camera asynchronously. Note that if a call to [self isCameraRunning] returns YES (code for that method is in part 2 below), we simply tell the delegate that we’re already running without doing anything else.

Sidenote: if you’re way smarter than me, you may be saying, “Wait, NSThread doesn’t define executeOnMainThread selector!” I award you geek points and explain that it’s just a category method I added to ensure a block executes on the main thread – you’ll see code for it at the end of the post (tip of the hat to Marco Arment for that one).

Why So Asynchronous?

Since the camera can take a second or two to start up and perception is reality, this is your opportunity to trick the user by making the app feel faster when it’s really not any faster at all. For example, I mentioned having a custom container controller that implements the CameraDelegate protocol. When the user wants to start the camera, that container VC adds the CameraViewController as a child controller but makes its view transparent (alpha = 0). To the user, this is unnoticeable, as they keep seeing the previous VC’s view. The container VC then calls startCamera on the newly added camera VC and immediately starts performing the “I’m switching to camera” animation on the currently shown VC. Our animation is a bit elaborate, but you can do whatever fits your app. Even if you just fade out the current VC’s view over 0.7 seconds or so while the camera is loading, the app will feel faster because something is happening.

Then, whenever the container VC receives the cameraStartedRunning message from the camera VC, it quickly fades in the camera VC’s view that it kept transparent until now, and it also fades in the appropriate overlay controls (such as the shutter button) if they are separate from the camera VC’s view (in our app, they are). What I mean by “quickly fades in” is 0.25 or 0.3 seconds – but tune it as you see fit.
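Here’s a rough sketch of that dance inside the container VC. The method and property names (switchToCamera, currentContentView, shutterOverlayView) are illustrative, and the durations are just the ones mentioned above:

- (void)switchToCamera
{
	[self addChildViewController:self.cameraVC];
	self.cameraVC.view.frame = self.view.bounds;
	self.cameraVC.view.alpha = 0; // invisible for now – the user keeps seeing the current screen
	[self.view addSubview:self.cameraVC.view];
	[self.cameraVC didMoveToParentViewController:self];

	[self.cameraVC startCamera];

	// Start the "I'm switching to camera" animation immediately, while the capture
	// session spins up on a background queue.
	[UIView animateWithDuration:0.7 animations:^{
		self.currentContentView.alpha = 0; // hypothetical property for the outgoing view
	}];
}

- (void)cameraStartedRunning
{
	[UIView animateWithDuration:0.25 animations:^{
		self.cameraVC.view.alpha = 1;
		self.shutterOverlayView.alpha = 1; // hypothetical overlay controls (shutter button, etc.)
	}];
}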

It’s pretty crazy how much faster even a simple fade out/fade in feels compared to just going to a black screen and waiting for the camera to spin up, even though the time it takes for the camera to open is about the same in both cases. This is true even considering the fact that our fade in animation technically delays full camera appearance by a couple hundred milliseconds. Perception is reality!

Alright, now to part 2 of CameraViewController.m:

...

- (void)stopCamera
{
	if (TARGET_IPHONE_SIMULATOR) {
		_simulatorIsCameraRunning = NO;
		return;
	}

	if (self.captureSession && [self.captureSession isRunning]) {
		dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^ {
			[self.captureSession stopRunning];
		});
	}
}

- (BOOL)isCameraRunning
{
	if (TARGET_IPHONE_SIMULATOR) return _simulatorIsCameraRunning;

	if (!self.captureSession) return NO;

	return self.captureSession.isRunning;
}

- (void)onVideoError:(NSNotification *)notification
{
	NSLog(@"Video error: %@", notification.userInfo[AVCaptureSessionErrorKey]);
}

- (void)takePhoto
{
	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		if (TARGET_IPHONE_SIMULATOR) {
			[self.delegate didTakePhoto: [UIImage imageNamed:@"Simulator_OriginalPhoto@2x.jpg"]];
			return;
		}

		AVCaptureConnection *videoConnection = nil;
		for (AVCaptureConnection *connection in self.stillImageOutput.connections) {
			for (AVCaptureInputPort *port in [connection inputPorts]) {
				if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
					videoConnection = connection;
					break;
				}
			}
			if (videoConnection) {
				break;
			}
		}

		[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
														   completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
		 {
			 NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];

			 [self.delegate didTakePhoto: [UIImage imageWithData: imageData]];
		 }];
	});
}
@end

stopCamera does nothing but set our ‘fake’ boolean if we’re in the simulator, and actually stops the camera when we’re on the device. Notice that you don’t see this called anywhere inside this controller. Why? If your case isn’t like mine, you definitely SHOULD call this from within viewWillDisappear in CameraViewController (and perhaps from within dealloc as well). Otherwise, the camera will keep running in the background until iOS kills it (and I don’t know what the rules are for that), draining the battery and turning your users into an angry mob.
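In that common case, the call could be as simple as this (a minimal sketch, not code from my app):

- (void)viewWillDisappear:(BOOL)animated
{
	[super viewWillDisappear:animated];
	[self stopCamera]; // don't leave the capture session running once the camera is off-screen
}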

If your case is like mine, however, your users will see some sort of edit screen after taking a picture, with an option to retake it, and they may need to go back and forth a lot. To make sure they don’t have to wait for the camera to start up every time, I keep the camera running for 10 seconds. This means that, every time the user takes a picture, I have to set up (and reset) a timer to kill the camera after 10 seconds. Where you put this timer code will depend on your app architecture, but do test well to make sure it always executes so that AVCaptureSession isn’t running in the background for long.

Back to code… The isCameraRunning method is self-explanatory, and here we finally see our fake camera-status boolean come in handy.

If AVCaptureSession blows up for some reason (device out of memory, internal Apple code error, etc.), an AVCaptureSessionRuntimeErrorNotification will be posted and onVideoError: will be called. Notice we subscribe to it when setting up self.captureSession back inside startCamera.

Last but not least is takePhoto. I wrapped it in a dispatch call, but frankly I am not sure it helps any, since Apple seems to block all threads when it extracts a photo. But if that changes in the future, I’ll be ready. This is where you’ll find the main workaround code to make the camera work in the simulator: we simply return a picture to the delegate. I’m returning the same picture every time in this code, but you can imagine having a set of pictures from which you select a random one every time.
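If you do want a random picture each time, a tiny variation on the simulator branch might look like this (the file names are placeholders for whatever sample images you bundle):

		if (TARGET_IPHONE_SIMULATOR) {
			NSArray *samplePhotos = @[@"Simulator_Sample1.jpg", @"Simulator_Sample2.jpg", @"Simulator_Sample3.jpg"];
			NSString *randomName = samplePhotos[arc4random_uniform((uint32_t)samplePhotos.count)];
			[self.delegate didTakePhoto:[UIImage imageNamed:randomName]];
			return;
		}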

Wrap-Up

I’m sure you have more questions about AVFoundation such as “what is AVCaptureConnection or AVMediaTypeVideo and why do I see the word ‘video’ so often when I only care about still images?”, but this is just how it’s done and it’s outside of the scope of this post to describe why. Just copy, paste, and enjoy =) And if it breaks, don’t blame me.

Now, what calls takePhoto? Your delegate view controller, most likely. The AVFoundation camera is barebones – it has no shutter button, etc. So you’ll have to create your own “overlay” controls view that shows a shutter button, a retake button, maybe a flash on/off button, and so on. In my app, the custom container controller I mentioned before is also the delegate for such an overlay view. When the user taps the shutter button on the overlay view to take a photo, the overlay view sends a “plzTakePhotoMeow” message (wording is approximate) to its delegate (the custom container VC I keep talking about), and that delegate then calls takePhoto on the CameraViewController instance. In turn, when the camera VC is done getting an image from AVCaptureSession, it returns the image to its delegate (the same custom container VC). The custom container VC can then remove the camera VC from the view hierarchy and instantiate the next VC, such as EditPhotoVC, passing it the photo from the camera.
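In code, that wiring might look roughly like this inside the container VC (CameraOverlayView, overlayViewDidTapShutter:, and EditPhotoViewController are hypothetical names standing in for your own classes):

// Overlay delegate callback: the shutter button was tapped.
- (void)overlayViewDidTapShutter:(CameraOverlayView *)overlayView
{
	[self.cameraVC takePhoto];
}

// CameraDelegate callback: the camera VC finished capturing.
- (void)didTakePhoto:(UIImage *)photo
{
	[self.cameraVC willMoveToParentViewController:nil];
	[self.cameraVC.view removeFromSuperview];
	[self.cameraVC removeFromParentViewController];

	EditPhotoViewController *editVC = [[EditPhotoViewController alloc] init];
	editVC.photo = photo; // hypothetical property on the edit screen
	[self presentViewController:editVC animated:YES completion:nil]; // or your own transition
}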

Ok, I promised code for that NSThread category, so here it is:

@implementation NSThread (Helpers)

+ (void)executeOnMainThread:(void (^)())block
{
	if (!block) return;

	if ([[NSThread currentThread] isMainThread]) {
		block();
	} else {
		dispatch_sync(dispatch_get_main_queue(), ^ {
			block();
		});
	}
}

@end

Finally, the code for the timer to stop the camera (if you go this route) is below. First we define a few properties on the controller that is the delegate to CameraViewController (again, in my case it’s the container VC):

@property (atomic) BOOL openingCameraView;
@property (nonatomic) NSTimer *stopCameraTimer;

And then, in didTakePhoto:, canceledTakingPhoto: (if your overlay controls UI has that button), etc., spin up the timer:

[NSThread executeOnMainThread:^{
	// display the next VC here
	...

	// start a timer to shut off camera
	if (!self.stopCameraTimer || !self.stopCameraTimer.isValid) {
		self.stopCameraTimer = [NSTimer scheduledTimerWithTimeInterval:15 target:self selector:@selector(stopCameraAfterTimer:) userInfo:nil repeats:NO];
	}
}];

The actual method the timer references is here, and it just ensures we don’t stop the camera while the camera VC is shown (that would be an oopsie) or while we’re in the process of showing it (self.openingCameraView is set manually to YES at the beginning of the method responsible for adding the camera VC to the view hierarchy):

- (void)stopCameraAfterTimer:(NSTimer *)timer
{
	if (!self.openingCameraView && !(self.currentlyShownVC == self.cameraVC)) {
		[self.cameraVC stopCamera];
	}
}

I hope this helps! If I made errors or you have questions, please contact me on Twitter.

Happy coding in 2014!

Progress – iPhone 4 Performance (Programming)

Merry Christmas, everyone!

I’ve been working on Progress, my first iOS app, on-and-off since February. I did 99% of the coding so far, Edward Sanchez did all of the graphical design, our friend Samuel Iglesias helped with some UI tricks, and all three of us collaborated on UX. Today, on my last full day of version 1.0 development, I implemented the last major piece of “polish” code that I’m particularly proud of, and it’s focused on performance.

When I first ran the app on an iPhone 4, I could barely scrub through 4 full-screen pictures per second. But today, check this out (Vimeo link):

iPhone 4 Scrubbing Performance

Yes, that is iPhone 4 blazing through full-screen pictures without any trickery (no GPUImageView yet or anything, just your normal UIImageView). To get here, I had to do three things:

1. Store each photo in 3 different versions:

  • “Full-size” version at 1936×2592 (iPhone 4 camera resolution, the lowest common denominator), compressed to 40% (as in, the iOS SDK’s 0.4 on a scale of 0 to 1 – this is really much better than JPEG’s 40% would suggest, as you will be hard pressed to notice any artifacts whatsoever). Between 150 and 300 KB/photo.
  • “Device-optimized”, i.e. whatever your device’s screen resolution is (so 640×960 for 3.5-inch screens, 640×1136 for 4-inch screens), compressed at 45% from the full-size photo above. Due to double compression, 45% (0.45) was as low as I could go. About 75-100 KB/photo.
  • “Low quality”, or the same resolution as device-optimized but at 25% compression level. This is about 40-50KB/photo. I can probably compress this further, but right now 25% works.

2. Use NSOperationQueues when loading the photos. I normally use GCD, but queues are perfect here. I start loading a device-optimized version for photo #1, but if photo #2 is requested while the device-optimized photo-loading queue still has an unfinished operation in it, I cancel that operation and start a low-quality photo-loading operation instead. At the same time, I throw a new NSOperation on the queue to load a device-optimized version (this time for photo #2), with the low-quality photo operation as a dependency and with the same completion block. So as soon as the low-quality photo is loaded, the device-optimized version begins to load and then replaces the low-quality version. Or the device-optimized operation is cancelled once again if the user jumps to the next photo while all this is going on. (A rough sketch of this idea follows the list.)

This is really just for devices like iPhone 4 and, to a lesser degree, iPod Touch 5G and iPhone 4S, and even there it happens so fast that the longest a user sees the low-quality version is maybe for 1/10th of a second – they don’t even get to notice the heavy compression on the low-quality photo. But they do get to notice the general shape/features, which is what the app is about, so for us it’s important to show _something_ instead of waiting for the device-optimized version.

3. The piece I added today: pre-caching. I remember reading @mattt’s article on NSCache some months ago where he talked about the mythical totalCostLimit property of NSCache. Mythical it is indeed, but Apple does use the size of an image in bytes as an example of what the property can mean. In this app, I have 3 NSCache instances, one for each of the photo versions I listed above and each with a different totalCostLimit. The NSCache instance for device-optimized photos has a totalCostLimit of 5242880, for example, which is 5 MB written out in bytes. So when the app loads, I launch a background task to pre-cache as many device-optimized photos as I can before I hit that limit. With current average photo sizes, that’s about 50-60 pictures. I take note of all the photos I wasn’t able to cache, and then run another algorithm to cache low-quality versions of those photos (2 MB limit for that cache), which is another 30-40 pictures or so. iPhone 4 manages to read and cache at least 15-20 photos per second, so by the time an average user tries scrubbing through their photos, chances are most of those photos will already be cached in either device-optimized or low-quality form. (A rough sketch of the cache setup follows the next paragraph.)
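Here’s the rough shape of the queue juggling from item 2. The queue properties and loader methods (deviceOptimizedQueue, lowQualityQueue, lowQualityImageAtIndex:, displayImage:forIndex:) are illustrative names, not the app’s actual code:

- (void)showPhotoAtIndex:(NSUInteger)index
{
	// A new photo was requested: cancel any pending device-optimized load so the
	// scrubber never waits on a stale operation.
	[self.deviceOptimizedQueue cancelAllOperations];

	NSBlockOperation *lowQualityOp = [NSBlockOperation blockOperationWithBlock:^{
		UIImage *image = [self lowQualityImageAtIndex:index]; // hypothetical loader
		dispatch_async(dispatch_get_main_queue(), ^{ [self displayImage:image forIndex:index]; });
	}];

	NSBlockOperation *optimizedOp = [NSBlockOperation blockOperationWithBlock:^{
		UIImage *image = [self deviceOptimizedImageAtIndex:index]; // hypothetical loader
		dispatch_async(dispatch_get_main_queue(), ^{ [self displayImage:image forIndex:index]; });
	}];

	// The device-optimized load only starts once the low-quality version is on screen,
	// and then replaces it.
	[optimizedOp addDependency:lowQualityOp];

	[self.lowQualityQueue addOperation:lowQualityOp];
	[self.deviceOptimizedQueue addOperation:optimizedOp];
}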

The power of the last item is what the video above really demonstrates – between the time the app loaded and I started scrubbing through the photos, all of the photos were already cached. That’s what the 76/76 number means in the top-right “debug” area – there were 76 total requests from the scrubber to show a photo, and for all 76 of those the app was able to show a device-optimized photo. If it hadn’t been able to keep up, the third column (the empty black area) would show the number of low-quality photos it had to fetch and show temporarily before catching up and replacing them with device-optimized versions (which was the case before today). Success!
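And here’s a rough sketch of the totalCostLimit mechanics from item 3, where the cost of each cached image is its byte size (photoPaths is an illustrative array of the device-optimized files on disk, not a real property from the app):

NSCache *deviceOptimizedCache = [NSCache new];
deviceOptimizedCache.totalCostLimit = 5242880; // 5 MB, expressed in bytes

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
	for (NSString *path in photoPaths) {
		NSData *data = [NSData dataWithContentsOfFile:path];
		UIImage *image = data ? [UIImage imageWithData:data] : nil;
		if (image) {
			// Cost is the byte size, so NSCache starts evicting once the total passes the limit.
			[deviceOptimizedCache setObject:image forKey:path cost:data.length];
		}
	}
});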

Outside of the general performance tricks that I still need to work on, such as minimizing alpha-blending, there’s another caching-related task I want to do before I can move on with peace of mind – forward-caching. For example, if we’re looking at photo #1 and start scrubbing to the right, going to #2, etc., and photos 2 through 9 are already cached, I need to start caching #10, #11, and so forth. When the user stops on a photo, I need to cache 5 photos or so on each side. Sure, they’ll still catch up with me eventually if they scrub fast enough, but it will be a smooth experience for them until then. And if they slow down for just a moment, I’ll again be X cached photos ahead of them, which at normal scrubbing speed means an “infinitely” smooth scrubbing experience.

Forward-caching isn’t essential for version 1.0, though – I’m content to ship with what we have.

Xcode 5 Developer Preview Crashes with Alcatraz or Lin Installed

If your Xcode 5 DP crashes on launch every time, check your plug-ins directory:

~/Library/Application Support/Developer/Shared/Xcode/Plug-ins/

(Note: if you don’t see the Library directory inside your home directory, tell OS X to show hidden files first by typing this in Terminal: defaults write com.apple.finder AppleShowAllFiles TRUE, then relaunch Finder with killall Finder.)

Move Alcatraz.xcplugin and Lin.xcplugin into another directory and try launching Xcode again. I found that having either one of them installed (as of June 12, 2013) causes Xcode 5 to not be able to start at all.

If you continue having crashes, try moving all of the plugins into a backup folder on the same level as the “Plug-ins” folder or higher. If Xcode launches, move plug-ins back one by one :)

Hope this helps someone!