Sunday, February 1, 2015

CES 2015: Asus unveils the Zenfone 2: $199 Flagship



A flagship smartphone starting at $199? Is that even possible? Well, Asus appears to have achieved it, judging by the CES 2015 announcement of its new Zenfone 2 smartphone. Basically, the Zenfone 2 has specs that put it squarely in flagship territory, yet its price tag is similar to that of mid-range phones. Costing as little as $199 before taxes, the Zenfone 2 might just offer the most bang for the buck we've seen in a phone since the Moto G.

Asus' new smartphone comes in many variants at different price points, and instead of offering only variants with different storage capacities like most phones, the Zenfone 2 is available in different configurations of storage, RAM, and even processor speed. Of course, the $199 price tag applies to the simplest configuration, which is itself no slouch, with 16GB of storage, 2GB of RAM and an Intel Atom Z3560 processor (quad-core, 1.8GHz). Considering that most phones in this price range offer much less than that, Asus really must be congratulated for delivering such a good phone at this price.

Of course, it is very possible that Asus made some compromises to reach the low starting price for the Zenfone 2, and this can be confirmed only when the phone is actually available for purchase. For now, though, Asus has an astounding phone in their hands, at least on paper.

Asus Zenfone 2
 Body: 152.5 x 77 x 10.9~3.9mm, 170g
 Display: 5.5" IPS LCD Full HD (1920 x 1080, 403ppi) w/ Corning Gorilla Glass 3
 Data: LTE Cat. 4, 150/50 Mbps (FDD-LTE Bands 1/2/3/4/5/7/8/9/17/18/19/20/28/29; TDD-LTE Bands 38/39/40/41); HSDPA; GSM
 Connectivity (wireless): WiFi 802.11a/b/g/n/ac, Bluetooth 4.0, NFC, GPS, A-GPS and GLONASS
 Connectivity (ports): 3.5mm audio jack, microUSB 2.0 (with USB OTG), microSD card slot (up to 64GB)
 Camera (rear): 13MP, 5-element f/2.0 lens, dual-color LED flash, 1080p@30fps video
 Camera (front): 5MP, f/2.0 lens, 85-degree wide-angle
 Storage/RAM: 16/32/64 GB storage, 2/4 GB RAM; Asus WebStorage: 5GB (lifetime)
 Processor: Intel Atom Z3560/Z3580
 CPU: Quad-core Silvermont (4C/4T) @ 1.8GHz/2.3GHz
 GPU: PowerVR G6430 @ 533MHz (136 GFLOPS)
 OS: Android 5.0 Lollipop with Asus ZenUI
 Battery: Li-Ion 3,000 mAh, non-removable


Design


The Zenfone 2 is undeniably a beautiful phone. There are many notable details in the Zenfone 2's design, the sum of which might just make Asus' new design the best one we'll see in a while. Most notably, Asus decided, like LG, to move the traditional side-mounted volume keys to the back of the device, aiming for a reduction in bezel width. Unlike LG's approach, though, which moves the power button to the back along with the volume keys, Asus chose to relocate only the volume keys, putting the power button on the top of the phone. Speaking of bezel width, Asus' new phone has extremely slim bezels, which make the phone that bit more attractive, with a relatively good screen-to-body ratio of 72%.

Other notable features of the design include the all-aluminium rear and the Zenfone series' signature spun-metal strip at the bottom of the phone's front. The rear of the device is made entirely of brushed metal, in a rather HTC One-like fashion, which makes for a very premium-looking phone despite the low starting price.

ZenUI
ZenUI is what Asus calls its Android skin. The newest version of ZenUI has Android 5.0 Lollipop, the latest Android version, at its core. Overall, ZenUI is one of the best Android skins available, and in many ways the opposite of TouchWiz: it keeps layout and functionality close to stock and adds little bloatware, focusing instead on changing the look of Android.

ZenUI makes Android look very sleek with a flat, modern user interface, all without being intrusive to the inexperienced user. In general, this must be one of the best Android skins, if not the best.

Display

The Zenfone 2's display is on par with current flagships, which is very good for a phone with such a low starting price. What we have in hand is a 5.5" IPS LCD with Full HD resolution. With such a screen size, the Zenfone definitely steps into phablet territory, even if the extremely slim side bezels help reduce the phone's footprint.

If anyone's disappointed that Asus used "just" a 1080p display instead of a QHD screen: don't be. Any resolution increase from 1080p on a 5.5" phone would be very hard to notice. The difference is definitely not noticeable enough to offset the extra power consumption and performance impact that comes with QHD. At this point, it's much better to keep resolution where it is and work on other aspects of the display, and Asus' choice to keep it to 1080p might even give it a performance advantage over the competition. 

Anyway, Asus' phone ends up with a 403ppi pixel density, which should be perfectly fine unless you are trying to make out some really small details. The Zenfone 2 will look as sharp as any flagship phone should.
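As a quick sanity check on these numbers: pixel density is just the diagonal pixel count divided by the diagonal size in inches. A minimal Python sketch, using the figures from the spec table above:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: pixels along the diagonal divided by the diagonal in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1920, 1080, 5.5)))  # ~401, in line with the 403ppi Asus quotes
print(round(ppi(2560, 1440, 5.5)))  # ~534, what a QHD panel this size would give
```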

Processor

You can actually choose between two different processors for the Zenfone 2, depending on how much you're willing to pay for the phone. Ever faithful to Intel, Asus has again decided to ship its flagship phone with an Atom processor, in this case either the Z3560 or the Z3580. Since both available options belong to the same platform (Moorefield), the two chips are physically identical, and the difference between them comes down to clock speed.

Both processors are built on a 22nm process, and include a CPU consisting of four Silvermont cores with 2MB of L2 cache. Since Silvermont is a 64-bit CPU and the Zenfone 2 runs Android 5.0, the phone will benefit from the processor's 64-bit processing capabilities.

Also, both processors use a PowerVR G6430 GPU (the same as in the Apple A7), with a base frequency of 457MHz and a burst frequency of 533MHz. In burst mode, the G6430 can deliver a peak compute power of 136 GFLOPS, which is slightly above the Snapdragon 800's Adreno 330, but much less powerful than the Snapdragon 805's Adreno 420. The GPU is the only aspect of the Zenfone 2 that is not on par with flagships, but then again, it's hardly a slouch.
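For the curious, the 136 GFLOPS figure can be reproduced from the burst clock, assuming the commonly cited count of 128 FP32 ALUs for the four-cluster G6430 (an assumption on my part, not something Asus publishes) and one fused multiply-add per ALU per clock:

```python
def peak_gflops(alus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: one fused multiply-add (2 FLOPs) per ALU per clock."""
    return alus * 2 * clock_ghz

# Assuming 128 FP32 ALUs for the four-cluster G6430:
print(peak_gflops(128, 0.533))  # -> 136.448, matching the table's 136 GFLOPS
```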

The differences between the Z3560 and the Z3580 come down to their CPU clock speed. While the Z3560 has its four cores clocked at up to 1.83GHz, the Z3580 can go all the way to 2.33GHz. That's a 500MHz increase for each core, which should definitely make a difference in usage...and in battery life. 

Conclusion

A flagship phone for $199. That is a combination most people would call impossible. However, at least on paper, Asus really didn't cut back on any features to achieve the low price. Every aspect of this phone is flagship-level, and this achievement could make the Zenfone 2 a very successful smartphone.

You get everything you could possibly ask for from a flagship: a beautiful design, the latest Android version, good cameras (although 4K recording would have been appreciated), a large, high-quality, high-resolution display, and a good processor too (even if not exactly among the best). All for a very low starting price. Even if the $199 version of the phone has "only" 2GB of RAM and a lower-clocked CPU, you're still getting much more than any other phone in this price range.

In general, Asus has done very well with its latest flagship. It has the specs of a flagship and the price of a mid-ranger. How that achievement will translate into sales figures is yet to be seen, but if Asus gets availability and marketing right, its Zenfone 2 could be a huge success.

Monday, January 19, 2015

CES 2015: Nvidia Announces the Tegra X1 Processor


During this year's Consumer Electronics Show (CES), Nvidia gave us a glimpse of its next-generation mobile processor. And it's fantastic. Implementing Nvidia's new mobile-first strategy for the first time, the company was able to quickly port its latest graphics architecture onto the Tegra line. As a result, the new Tegra X1 processor has a GPU based on Nvidia's brand-new architecture, dubbed Maxwell. Thanks to its advanced power efficiency, Nvidia was able to build a very, very powerful GPU for the Tegra X1 without exceeding the power budgets that define the ultra-mobile market. The Tegra X1 will most likely be destined for high-performance and gaming tablets, and maybe even high-end Chromebooks.

Lithography

The truth is that 28nm is getting old. This generation, we're starting to see manufacturers move to the smaller 20nm process node. Apple has done it with its A8 and A8X SoCs, and Samsung has done it with its Exynos 5433 processor. Nvidia has now jumped on the 20nm bandwagon with the Tegra X1. Built by TSMC, the Tegra X1 is Nvidia's first SoC to benefit from a 20nm process node, and that should really help keep power consumption in check, which is necessary for Nvidia, especially considering the extremely beefy GPU.

CPU

Unlike with the Tegra K1, which had both a 32-bit Cortex-A15 version and a 64-bit Denver version announced, for the Tegra X1 Nvidia has so far mentioned only one version, which ditches its own Denver core in favor of fully ARM-designed cores. It's still 64-bit, luckily. The Tegra X1 features a big.LITTLE CPU configuration, with four high-performance Cortex-A57 cores and four Cortex-A53 cores designed for low-power operation. Clock speeds have not been specified at this point, however. It must be pointed out that, unlike Samsung's Exynos 5433, which also uses big.LITTLE quad-core Cortex-A57s and A53s, the Tegra X1's CPU cannot use both CPU clusters at the same time, so it can't be considered an actual octa-core CPU. That is arguably a good choice by Nvidia, considering that the Cortex-A57 is already a very powerful core, and considering how most applications don't scale well beyond four cores. Performance estimates cannot be made yet due to the unannounced clock speeds, but given that the Tegra X1 is meant for high-end tablets, I imagine the clocks will be pretty high, and it's pretty safe to say that the Tegra X1 will be no slouch when it comes to CPU performance.

Of course, another variant of the Tegra X1 with a Denver CPU is definitely very possible, but as with the Tegra K1, such a variant would only be released later in the year. In truth, I'm surprised that Nvidia didn't make Denver the default CPU configuration for the Tegra X1. It showed very good performance in the Nexus 9, and I would like to have seen the Tegra X1 launch with it.

GPU

This is, of course, the spotlight of Nvidia's announcement. Nvidia's mobile-first development strategy has enabled it to adapt its latest GPU architecture for mobile very quickly, and that represents a huge advantage for Nvidia over the competition. With a 2x performance-per-watt advantage over Kepler, the new Maxwell architecture is extremely power efficient, delivering much more performance than Kepler without using any more power.

For the Tegra K1, Nvidia had a single Kepler SMX (192 CUDA cores) running at up to 950MHz (although the devices that launched with it usually kept the clock speed at 850MHz). A single SMX was paired with four ROPs (Render Output Units) and 8 TMUs (Texture Mapping Units). With the Tegra X1, however, Nvidia is moving to two of Maxwell's basic graphics units, called SMMs. Each contains 128 CUDA cores, so the Tegra X1 has a total of 256 CUDA cores. These are accompanied by 16 ROPs and 16 TMUs, all running, according to Nvidia, at a max clock speed of 1GHz. This clock speed sounds almost preposterous, and it is very possible that tablets running the Tegra X1 will keep the GPU clock somewhat below that, for thermal and power budget reasons.
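Those unit counts map directly onto the theoretical throughput figures in the table below. A rough sketch of the arithmetic, assuming the usual 2 FLOPs per CUDA core per clock and one pixel/texel per ROP/TMU per clock:

```python
cuda_cores, rops, tmus, clock_ghz = 256, 16, 16, 1.0  # Tegra X1 figures from Nvidia

print(cuda_cores * 2 * clock_ghz)  # FP32 GFLOPS (1 FMA = 2 FLOPs per core) -> 512.0
print(rops * clock_ghz * 1000)     # peak pixel fill rate in MP/s  -> 16000.0
print(tmus * clock_ghz * 1000)     # peak texture fill rate in MT/s -> 16000.0
```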

Wrapping up the technical stuff, here's a table comparing Nvidia's last few SoCs:

Tegra X1 / Tegra K1 / Tegra 4 / Tegra 3
 CPU: 64-bit Quad-core Cortex-A57 + Quad-core Cortex-A53 / 32-bit Quad-core Cortex-A15 @ 2.3GHz + single "companion" Cortex-A15 core, or 64-bit Dual-core Denver @ 2.5GHz / 32-bit Quad-core Cortex-A15 @ 1.9GHz + single "companion" Cortex-A15 core @ ~800MHz / Quad-core Cortex-A9 @ 1.6GHz + single "companion" core @ ~500MHz
 Lithography: 20nm / 28nm / 28nm / 40nm
 GPU core configuration: 256 CUDA cores, 16 ROPs, 16 TMUs / 192 CUDA cores, 4 ROPs, 8 TMUs / 48 pixel shaders, 24 vertex shaders / 8 pixel shaders, 4 vertex shaders
 GPU clock: 1,000MHz / 950MHz / 672MHz / 520MHz
 FP32 peak compute (GFLOPS): 512 / 365 / 97 / 12.5
 Pixel fill rate (MP/s): 16,000 / 3,800 / ? / ?
 Texture fill rate (MT/s): 16,000 / 7,600 / ? / ?
 Memory interface: Dual-channel 64-bit LPDDR4-1600 (25.6GB/s) / Dual-channel 64-bit LPDDR3-1066 (17GB/s) / Dual-channel 32-bit LPDDR3-1866 (15GB/s) / Single-channel 32-bit LPDDR3-1600 (6.4GB/s)

The table clearly shows how far Nvidia has come since the Tegra 3. The Tegra X1 is a huge leap forward compared to the K1 in every aspect, especially in the graphics department. The Tegra X1's GPU is far beyond what previous-gen consoles like the Xbox 360 and PS3 could achieve, and even some current low-end dedicated laptop GPUs are less powerful than it. Kudos to Nvidia for this impressive achievement.
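The memory bandwidth figures in the table follow the same kind of arithmetic: bus width in bytes times the effective (double data rate) transfer rate. A small sketch, assuming the interface widths and clocks as listed:

```python
def peak_bandwidth_gbs(bus_width_bits: int, clock_mhz: float) -> float:
    """Peak bandwidth: bytes per transfer * 2 transfers per clock (DDR) * clock."""
    return (bus_width_bits / 8) * 2 * clock_mhz / 1000

print(peak_bandwidth_gbs(64, 1600))  # Tegra X1, LPDDR4-1600 -> 25.6 GB/s
print(peak_bandwidth_gbs(64, 1066))  # Tegra K1, LPDDR3-1066 -> ~17.1 GB/s (table rounds to 17)
```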


Conclusion

Nvidia's new focus on bringing their latest GPU architectures to mobile is doing them a lot of good. The new Tegra X1 has a very, very large graphics processor, but with the benefit of the Maxwell architecture's astounding power efficiency.

In general, Nvidia's new processor is a great package overall, showing off excellent specs and top-notch future-proofing. Everything that's necessary for a new high-end SoC is there: 64-bit processing and a 20nm process, for instance. While in most aspects Nvidia is playing on equal ground with other flagship mobile processors, its GPU sets it apart from anything else on the market right now. The Tegra K1 was already ahead of pretty much every other SoC, except the Apple A8X, in graphics benchmarks. Now the Tegra X1 will help Nvidia extend that lead even further. I was only a bit disappointed that Nvidia, at least for now, is not making use of its Denver CPU cores in the Tegra X1 (and considering how well they performed in the Nexus 9, one might wonder why Nvidia made this decision).

In terms of actual products that might eventually carry the Tegra X1, I believe that smartphones are still out of the picture. Despite the 20nm process and Maxwell's efficiency, a 256-core GPU might still be too much for a smartphone's battery size and thermal dissipation capacity. However, I can speculate that a Tegra X1 with a much lower-clocked GPU could make its way into a high-end phablet.

Overall a great package, with the added benefit of a GPU that rivals even some lower-end dedicated laptop GPUs. Nvidia did a great job with its new SoC, and while it may still not be fit for smartphones, the Tegra X1 is just perfect for high-end tablets and any kind of compact gaming device, and might just turn out to be this year's most powerful SoC.

Friday, January 16, 2015

Samsung Galaxy Note 4 & Note Edge Review



Samsung's Galaxy Note line went from a doubtful niche attempt to essentially the most popular line in Samsung's inventory in just four years. Naturally, Samsung's 2014 Note phablets were highly anticipated, and Samsung delivered accordingly. This time around, the direct successor to last year's Note 3, the aptly named Note 4, came accompanied by a very interesting attempt from Samsung to make a popular phone using its curved display technology: the Galaxy Note Edge. It is very similar to the Note 4, except with a curved extension of the main display replacing the right side of the device.
Both phones excel in almost every aspect you can think of, and are definitely very worthy successors to last year's Galaxy Note 3. Those who are adventurous and looking for something new and different will look to the Note Edge, while those who just want a traditional phablet (albeit, in this case, the best one on the market) will go for the regular Note 4.

To begin this review, let's look at how this year's Galaxy Notes fare in terms of pure specs, listed in the table below:

Galaxy Note 4 / Galaxy Note Edge
 Body: 153.5 x 78.6 x 8.5mm, 176g / 151.3 x 82.4 x 8.3mm, 174g
 Display: 5.7" Super AMOLED QHD (2560 x 1440, 515ppi) / 5.6" Super AMOLED WQXGA (2560 x 1600, 524ppi) with curved edge
 Storage & RAM: 32/64 GB, 3GB RAM (both)
 Networks: GSM, HSDPA, LTE Cat. 6 (both)
 WiFi: dual-band 802.11 a/b/g/n/ac (both)
 Bluetooth: Bluetooth 4.1 LE (both)
 Camera (rear): 16MP with OIS, LED flash, face detection and HDR; 4K@30fps (2160p) video, 1080p@30fps or 720p@120fps with video stabilization (both)
 Camera (front): 3.7MP with 2K@30fps (1440p) video (both)
 OS: Android 4.4 KitKat w/ TouchWiz UI (both)
 Processor: SM-N910S: Snapdragon 805; SM-N910C: Exynos 5433 / Snapdragon 805
 CPU: SM-N910S: Quad-core 32-bit Krait 450 @ 2.7GHz; SM-N910C: Octa-core 64-bit big.LITTLE (Quad-core Cortex-A57 @ 1.9GHz + Quad-core Cortex-A53 @ 1.3GHz) / Quad-core 32-bit Krait 450 @ 2.7GHz
 GPU: SM-N910S: Adreno 420 @ 600MHz (337.5 GFLOPS); SM-N910C: Mali-T760 MP6 @ 700MHz (204 GFLOPS) / Adreno 420 @ 600MHz (337.5 GFLOPS)
 Battery: Removable Li-Ion 3,220mAh, up to 14 hours of video playback / Removable Li-Ion 3,000mAh, up to 12 hours of video playback
 Features: Heart rate and SpO2 sensors, fingerprint scanner, S Pen / Smart Edge screen, heart rate and SpO2 sensors, fingerprint scanner, S Pen


Design

This is probably the only aspect where the Galaxy Note 4 and the Galaxy Note Edge actually differ significantly. While the Galaxy Note 4 has the regular shape we've come to expect phones to have, with a flat screen on the front, the Galaxy Note Edge makes an interesting use of Samsung's curved screen technology. Instead of a flat screen covering the front, the Note Edge's panel cascades down the right edge of the phone, replacing the entire right side of the device. While it makes for a very interesting phone from an aesthetic point of view, what is even better is the added functionality that the curved display offers. The side screen can be used for many things, like quick notifications, controls, and app shortcuts. While it won't exactly revolutionize the smartphone experience, it is a convenient extra to have.

Lefties beware! Since the Note Edge curves down the right side of the phone, it is less convenient for lefties to use. In the future, we might be seeing phones that curve down both sides, but until then, this device is more appropriate for right-handed users. 

In any case, aside from the curved screen, the Galaxy Note Edge and the Note 4 are very similar. Taking a leaf from the Galaxy Alpha's book, the two phablets' sides are made of aluminium (or, in the case of the Note Edge, three of the four sides), with the back panel made of plastic textured to resemble leather. The back cover of the new Notes is still removable, and so is the battery. While I would've liked to see an all-aluminium design, the new Notes feel very good in hand thanks to the metal sides. The front of the devices resembles any recent Samsung device, with the usual physical home button sitting between the Task Switcher and Back capacitive buttons. Underneath the home button is Samsung's swipe-based fingerprint scanner.

Overall, a very good design for the new Notes. The aluminium sides, especially, are a very welcome addition to the phablets' designs, especially coming from Samsung. And at long last, Samsung has found a very non-disruptive and useful way to implement its curved AMOLED screens on a smartphone.

Display

When it comes to displays, Samsung's flagships are always highly anticipated, as their AMOLED screens are always among the best in the mobile space. Both the Note Edge and the Note 4 feature Super AMOLED screens with a Diamond PenTile pixel matrix. Of course, the main difference here is that one has a flat screen and the other has a curved one.

The Note 4 features a 5.7" display, just like the Note 3, except that this time the resolution is bumped to a stunning 2560 x 1440, which results in a fine 515ppi. This pixel density is really starting to approach the limit beyond which the human eye cannot resolve any further detail, but so far there are still advantages to be had from the extra resolution.
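A standard rule-of-thumb calculation backs this up. Assuming 20/20 vision resolves about one arcminute of detail (a common approximation, not a hard physiological limit), the densest pixel grid the eye can distinguish at a given viewing distance works out as follows:

```python
import math

def acuity_ppi_limit(viewing_distance_in: float) -> float:
    """Densest pixel grid a 20/20 eye can resolve (~1 arcminute per pixel)."""
    one_arcmin_rad = math.radians(1 / 60)
    return 1 / (viewing_distance_in * math.tan(one_arcmin_rad))

print(round(acuity_ppi_limit(12)))  # ~286 ppi at a typical 12" viewing distance
print(round(acuity_ppi_limit(8)))   # ~430 ppi with the phone unusually close
```

By this estimate, 515ppi is already past what most people can resolve at normal distances, which is why the returns on extra resolution are diminishing.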

The Note Edge features a curved 5.6" display with a 2560 x 1600 resolution, which translates into a pixel density of 524ppi. Compared to the Note 4, the extra 160 columns of pixels form the curved edge portion of the display. That aside, the Note Edge's panel should be identical to the Note 4's, offering the same benefits of Samsung's AMOLED technology, like extremely saturated colors and stunning contrast.

Software & Features

Both the Galaxy Note 4 and the Note Edge run on an Android 4.4.4 KitKat build skinned with Samsung's TouchWiz UI. An update to Android 5.0 Lollipop should be coming out soon.

TouchWiz is known for its various software features, some useful, many useless. The same can obviously be said about the new Galaxy Notes' software. Features like Multi Window and Air Gestures continue to be implemented and refined in Samsung's new phablets. Despite how heavy the whole TouchWiz package is, the 3 GB of RAM and powerful processors should keep the new Notes running very smoothly, no matter what you throw at them.

Of course, these being devices from the Note range, a very important part of the package is the S Pen, Samsung's active stylus, which is better than ever this time around. More than ever, the S Pen lends itself to making the Note experience much more interactive and productive than on any other device.

Processor & Performance

Like every flagship device these days, the Galaxy Note 4 and Note Edge are powered by some of the most powerful processors currently available. There is a distinction to be made, however: while the Galaxy Note 4 is available with either a Snapdragon 805 or an Exynos 5433 processor, depending on the region, the Galaxy Note Edge uses only the Snapdragon 805.

The Snapdragon 805, built on a 28nm HPM process (which is starting to get old), features a quad-core Krait 450 CPU clocked at up to 2.7GHz. The CPU is 32-bit, so when Lollipop comes to the Snapdragon 805-powered Notes, the new OS's 64-bit support will be of no benefit. Alongside the CPU, there is an Adreno 420 GPU clocked at 600MHz and a dual-channel 64-bit LPDDR3-1600 memory interface, offering ample bandwidth at a peak 25.6GB/s.

Samsung's Exynos 5433 processor, used in some variants of the Note 4 but not in the Note Edge, is a totally different beast. Built on Samsung's cutting-edge 20nm HKMG process node, it consists of a big.LITTLE CPU configuration, with four high-performance Cortex-A57 cores clocked at 1.9GHz and four low-power Cortex-A53 cores clocked at 1.3GHz. When necessary, both CPU clusters can work together, essentially making it a true octa-core CPU. Backing up this beastly CPU is ARM's Mali-T760 GPU clocked at 700MHz, and to top it off (not so well), the system is fed by a 32-bit dual-channel LPDDR3-1650 memory interface capable of delivering up to 13.2GB/s of bandwidth. This is much less than what the Snapdragon 805's memory interface can deliver, and I'm not sure 13.2GB/s can cut it for such a high-resolution display, though in practice it should only pose a bottleneck when running bandwidth-heavy games at the Note 4's native resolution.
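To put that concern in rough numbers, here's a back-of-the-envelope estimate (my own, not Samsung's) of raw framebuffer traffic at the Note 4's native resolution:

```python
# Rough estimate: one 32-bit RGBA framebuffer write plus one scanout read per frame.
width, height, bytes_per_pixel, fps = 2560, 1440, 4, 60

frame_mb = width * height * bytes_per_pixel / 1e6   # ~14.7 MB per frame
traffic_gbs = frame_mb * fps * 2 / 1000             # -> ~1.8 GB/s
print(round(frame_mb, 1), round(traffic_gbs, 1))
```

That leaves plenty of headroom in 13.2GB/s for the UI; the worry is games, where overdraw, texture fetches and multiple render passes can multiply that figure several times over.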

All variants of the Note 4 and Note Edge carry 3 GB of RAM, which should be more than enough, even with Samsung's heavy TouchWiz features eating up memory.

Battery

Considering the large, high-resolution displays and the powerful processors, the Galaxy Note 4 and Note Edge require large batteries to keep them running long enough. The Galaxy Note 4 has a large 3,220 mAh battery feeding it, which should be enough despite the power-hungry internals. However, probably because of the curved display portion, Samsung had to reduce the Galaxy Note Edge's battery size to 3,000 mAh. While this is still a large battery, it means the Galaxy Note Edge will hardly win any battery life tests.

Conclusion

The new Galaxy Note 4 and Note Edge are excellent devices, delivering the goods in pretty much every aspect you can possibly think of. Beautiful screen, attractive design, powerful hardware and, of course, the venerable S Pen all make the new Notes very worthy successors of the flagship Note line.

The Note 4 is not exactly revolutionary compared to last year's Galaxy Note 3; however, every aspect of it is improved in comparison, securing Samsung's advantage in the phablet market. The Galaxy Note Edge, on the other hand, shows Samsung still experimenting with ways to implement its curved displays, and it is Samsung's best attempt at it so far. Without impairing the usability of the device, Samsung managed to implement its curved screen technology in a way that not only makes for an aesthetically pleasing phone, but also adds functionality that people might actually use, unlike previous attempts (Galaxy Round, I'm talking to you).

Overall, Samsung's 2014 Note devices are their best phablets ever, and probably the best in the entire phablet market, scoring high marks in every aspect you can think of.

Friday, December 12, 2014

Apple A8 vs Snapdragon 805 vs Exynos 5433 - Smartphone SoC Comparison (2014 Edition)

Smartphones have become just about the most important gadget in a person's life, positioning themselves as a sort of do-everything device (recently, even measuring your heart rate). As such, a smartphone needs a powerful processor to keep things going smoothly, even with so many utilities and features baked in. Accordingly, smartphone processor performance has seen exponential growth over the last few years, blazing past even some older laptops at this point. In 2014 we have the latest and greatest ultra-mobile processors shipping in devices in time for the holiday season. Among the best competitors we have Apple, with its A8 processor, found in the iPhone 6 and 6 Plus; Qualcomm, with its latest and greatest Snapdragon 805; and Samsung, with its octa-core Exynos 5433 SoC. There's no doubt that all three processors are performance monsters, but which of them offers the best performance, and more importantly, which one is the most power efficient?

Firstly, let's see how these processors compare on paper:

Apple A8 / Snapdragon 805 / Exynos 5433
 Process node: 20nm / 28nm HPM / 20nm HKMG
 CPU: Dual-core 64-bit "Enhanced Cyclone" @ 1.4GHz / Quad-core 32-bit Krait 450 @ 2.7GHz / Octa-core 64-bit big.LITTLE (Quad-core ARM Cortex-A57 @ 1.9GHz + Quad-core ARM Cortex-A53 @ 1.3GHz)
 GPU: PowerVR GX6450 @ 450MHz (115.2 GFLOPS) / Adreno 420 @ 600MHz (337.5 GFLOPS) / Mali-T760 MP6 @ 700MHz (204 GFLOPS)
 Memory interface: Single-channel 64-bit LPDDR3-1600 (12.8GB/s) / Dual-channel 64-bit LPDDR3-1600 (25.6GB/s) / Dual-channel 32-bit LPDDR3-1650 (13.2GB/s)


At least on paper, all three processors are extremely powerful and very competitive when it comes to performance and efficiency. However, the three SoCs use extremely different approaches to achieve their performance. Apple prefers a smaller CPU core count, making each core very large to achieve high performance with just two cores, while Samsung's quantity-over-quality philosophy means it chose to throw in a very large number of CPU cores (eight of them, in fact). Qualcomm sits between Apple and Samsung, offering four CPU cores with decent per-core performance. In practice, given that most applications do not scale performance very well beyond two cores, I personally prefer Apple's approach. However, the more limited selection of apps that can actually utilize a large core count (for instance, games involving more complex physics calculations) might make Samsung's approach the fastest option. Either way, the most accurate way of comparing the performance of these processors is with synthetic benchmarks.

Let's start with the GeekBench 3 benchmark, which tests CPU performance:
As you can see, Apple's second-generation Cyclone core is just about the fastest core used in any current smartphone. Nvidia's Denver CPU core, used in the Tegra K1 SoC, outperforms the Cyclone core, but since the Tegra K1 is pretty much a tablet-only platform, I'm not considering it in this comparison. Meanwhile, the Exynos 5433, also 64-bit, while behind the A8 by a large margin, is slightly above the Snapdragon 805. I also included data from the Snapdragon 801 chipset to quantify the evolution of the Krait 450 core in the Snapdragon 805 compared to its predecessor, Krait 400. The difference isn't big, actually, which means the Snapdragon 805 has the weakest single-threaded performance of all current high-end SoCs.
With four high-performance CPU cores aided by another four low-power cores (yes, Samsung managed to make both core clusters work at the same time, unlike with its previous big.LITTLE CPUs), it was obvious from the start that Samsung's processor would come out on top in applications that scale to multiple cores. In fact, the Exynos 5433's multi-threaded performance has a significant advantage over the competition. In second place comes the Snapdragon 805, with a much lower yet still very high score. Again, the multi-threaded test shows only a marginal improvement over the Snapdragon 801. And in last place comes Apple's dual-core A8, which, despite employing a very powerful core design, simply has too few cores to outperform the competition. Still, it's not far behind the Snapdragon 805, and its score is very respectable indeed.

Now, moving on to what probably is considered the most important area in SoC performance: graphics. To measure these processors' capability for graphics rendering, we turn to the GFXBench 3.0 test.
It's reasonable to say that the three main competitors in the high-end SoC segment are pretty much on par in terms of OpenGL ES 3.0 GPU performance. Still, the PowerVR GX6450 in the iPhone 6 Plus takes the lead, followed closely by the Snapdragon 805's Adreno 420, with the Mali-T760 in the Exynos 5433 in last place, though again by a small margin.
In OpenGL ES 2.0 the gap widens, but the same basic trend holds: the Apple A8 takes first place, followed closely by the Snapdragon 805, with the Exynos 5433 a bit further behind. Also note how, unlike what we've seen in the CPU benchmarks, this time the Snapdragon 805 gets a huge boost compared to its predecessor, the Snapdragon 801.
The ALU test focuses on measuring the GPU's raw compute power, and on this front Qualcomm seems to be sitting very comfortably, since both the Snapdragon 805 and its predecessor, the 801, are far ahead of the Apple A8's and the Exynos 5433's GPUs.

The Fill test depends mostly on the GPU's Render Output Units (ROPs) and on the SoC's memory interface. Given that the Snapdragon 805 has a massive memory interface, comparable to the one on Apple's tablet-primed A8X chip, it naturally has a huge advantage in this test. Meanwhile, the Apple A8 is slightly below the last-gen Snapdragon 801, and the Exynos 5433 comes in last place, but by a small margin.

Power Consumption and Thermal Efficiency

Since these chips are supposed to run inside smartphones, a lot of attention has to be paid to two requirements: the SoC must consume as little power as possible, especially at idle, and must not heat up too much under strain. I believe that Apple's A8 chip fares best in this department, because apart from being built on a 20nm process, its Cyclone CPU has proved quite efficient in previous appearances. As for Samsung's Exynos 5433, despite also being built on 20nm, I'm not sure that a processor that can have 8 CPU cores running simultaneously can keep itself cool under strain without thermal throttling. At least in terms of power consumption, idle power should be very low thanks to the low-power Cortex-A53 cores. Finally, it's a bit hard to determine how power efficient Qualcomm's processors are, because the company discloses close to nothing about its CPU and GPU architectures. However, it is a proven solution. Krait + Adreno SoCs from Qualcomm can be found in almost every flagship smartphone from 2014, so while Qualcomm has the disadvantage of still not having moved to 20nm, past experience shows that its SoCs and architectures are sufficiently efficient.

Conclusion

It's a bit hard to determine exactly which processor is the best. Each one of these fares better than the others in at least one area, but each also has its clear weakness.

The Apple A8, using just two, however powerful, CPU cores, running at relatively low clock speeds at that, delivers top-notch single-threaded performance; however, its low core count hurts its performance against the quad- and octa-core competition in multi-threaded applications. The PowerVR GX6450 GPU was a good choice, as at least for general gaming it appears to be the fastest solution available in any smartphone. Power consumption should also be pretty low, thanks to the 20nm process and to Apple's and ImgTech's efficient architectures.

The Snapdragon 805 is really more of an evolution of the 801, without any huge changes. For instance, it's the only 32-bit processor compared here. However, it still manages to deliver excellent performance, building on the success of the outgoing 801. While its single-threaded performance is a bit disappointing for a 2.7GHz CPU, it does very well in multi-threaded applications, nearing the Exynos 5433's performance. The Adreno 420 GPU also performs extremely well, losing only to the Apple A8 in GFXBench's general gaming tests and absolutely destroying the competition in terms of memory bandwidth and raw compute power. While a move to 20nm would be appreciated, Qualcomm's processors are known for being power efficient, so no problem here.

Finally, Samsung's Exynos 5433 is really a mixed bag. Its 20nm HKMG process, together with the low-power Cortex-A53 cores, makes way for excellent power efficiency, at least in terms of idle power, and thanks to its huge core count, its multi-threaded performance is ahead of everyone else's. It should be noted that, despite the 20nm process, having 8 cores running at full load might introduce the need for thermal throttling, especially in a smartphone chassis.
However, the Mali-T760 GPU employed is slightly behind the competition in terms of general gaming performance, and its raw compute power is quite disappointing...thankfully, raw compute power matters little to the vast majority of users. Still, it's an excellent GPU, just not THE best.

Overall, these are all excellent processors, each with its respective advantages and disadvantages. It all comes down to which aspects are more important to you. If you value performance in multi-threaded applications, an Exynos 5433-powered device is ideal. For an excellent all-around package, which is also a proven solution for smartphones (plus admirable GPU compute power), pick a Snapdragon 805 device. And if you don't care as much about multi-threaded performance, but want the best gaming performance in any smartphone, you can pick one of Apple's A8-powered iDevices.

Sunday, November 23, 2014

Apple A8X vs Tegra K1 vs Snapdragon 805 - Tablet SoC Comparison (2014 Edition)

In the last few years, ultra-mobile System-on-Chip processors have made unprecedented strides in terms of performance and efficiency, quickly advancing the standards for mobile performance. One form factor that particularly benefits from the exponential growth of SoC performance is the tablet, since its large screen allows the processor's abilities to be fully utilized. For the 2014 holiday season, we have the latest and greatest of mobile performance shipping inside high-end tablets. Apple has made a whole new SoC just for its iPad Air 2 tablet, which it calls the A8X. Nvidia's Tegra K1 processor, which borrows Nvidia's venerable Kepler GPU architecture, has also appeared in a number of new high-end tablets. Finally, we have the Qualcomm Snapdragon 805 processor found in the Amazon Kindle Fire HDX 8.9" (2014). Unfortunately, most other tablets either use the aging Snapdragon 801 processor or, in the case of Samsung's latest high-end tablets, an even older Snapdragon 800 or the also-old Exynos 5420, which debuted with the Note 3 phablet in late 2013. In any case, at the pinnacle of tablet performance, we have the Apple A8X, the Tegra K1 and the Snapdragon 805 battling for the top spot.

Apple A8X / Nvidia Tegra K1 / Snapdragon 805
 Process node: 20nm / 28nm HPM / 28nm HPM
 CPU: Tri-core 64-bit "Enhanced Cyclone" @ 1.5GHz / 32-bit: Quad-core ARM Cortex-A15 @ 2.3GHz, or 64-bit: Dual-core Denver @ 2.5GHz / Quad-core Krait 450 @ 2.5GHz
 GPU: PowerVR GXA6850 @ 450MHz (230 GFLOPS) / 192-core Kepler GPU @ 852MHz (327 GFLOPS) / Adreno 420 @ 600MHz (172.8 GFLOPS)
 Memory interface: Dual-channel 64-bit LPDDR3-1600 (25.6GB/s) / Dual-channel 64-bit LPDDR3-1066 (17GB/s) / Dual-channel 64-bit LPDDR3-1600 (25.6GB/s)


The CPU

It can certainly be said that all of this year's high-end mobile processors have excellent CPU performance. However, each manufacturer took a different path to reach those high performance demands, and that is what we'll be looking at in this section.

Starting with the A8X's CPU, what we have in hand is Apple's first CPU with more than two cores. This time we have a tri-core CPU, based on an updated revision of the Apple-designed Cyclone core, which uses the ARMv8 ISA and is therefore a 64-bit architecture. Clock speeds remain conservative with Apple's latest CPU, going no further than 1.5GHz. So with three cores at 1.5GHz, how does Apple achieve performance competitive with quad-core, 2GHz+ offerings from competitors? The answer lies within the Cyclone core.
The Cyclone core, now in its second generation, is a very wide core. As it is, it can issue up to 6 instructions per clock. Each Cyclone core also contains 4 ALUs, as opposed to 2 ALUs per core in Apple's previous CPU architecture, Swift. The reorder buffer has also been enlarged to 192 instructions, in order to avoid memory stalls and utilize the 6 execution pipelines more fully. In comparison, a Cortex-A15 core can co-issue up to 3 instructions per clock, half as many as Cyclone, and can hold up to 128 instructions in its reorder buffer, only two thirds of what Cyclone's can hold.
By building a very wide CPU architecture, and keeping its CPUs at low core counts and clock speeds, Apple has, in one move, achieved excellent single-threaded performance, far beyond what a Cortex-A15 or a Krait core can produce, while at least matching the quad-core competition in multi-threaded processing. I've always said that, because most CPU workloads are serial in nature, CPUs are more efficient when built to emphasize single-threaded performance, and Apple continues to do the right thing with Cyclone.
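To make the trade-off concrete, here's a toy comparison of theoretical peak issue rates (issue width times clock), using the widths and clocks cited in this article. Real-world IPC is far lower and workload-dependent, so treat this purely as framing:

```python
# Toy peak-issue comparison: a wide, low-clocked core vs a narrow, high-clocked one.
cores = {
    "Cyclone (A8X)":    {"width": 6, "clock_ghz": 1.5},
    "Krait 450 (S805)": {"width": 3, "clock_ghz": 2.65},
}
for name, c in cores.items():
    print(name, round(c["width"] * c["clock_ghz"], 2), "billion instrs/s peak per core")
# Cyclone: 9.0 vs Krait 450: 7.95 -- the wide core wins even at nearly half the clock,
# and a lower clock generally means a lower voltage, hence better power efficiency.
```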

The Snapdragon 805 is the last high-end SoC to use Qualcomm's own Krait CPU architecture, which was introduced WAY back with the Snapdragon S4. Needless to say, it's still a 32-bit core. The last revision of the Krait architecture is dubbed Krait 450. While Krait 450 carries many improvements compared to the original Krait core, the basic architecture is still the same. Like the Cortex-A15 it competes with, Krait is a 3-wide machine. In comparison to Cyclone, it's a relatively narrow core, and therefore it won't be as fast in terms of single-threaded performance. Krait 450's tweaked architecture allows it to run at a whopping 2.7GHz, or to be more exact, 2.65GHz. In the case of the Snapdragon 805, we have four of these Krait 450 cores. Qualcomm's signature architecture tweak, which puts each core on an individual voltage/frequency plane, allows each core to run at a different frequency. That reduces the power consumption of the SoC, and should translate into better battery life. With four cores at such a high frequency, the Snapdragon 805's CPU gets very good multi-threaded performance, although the relatively narrow Krait core hurts single-threaded performance considerably.

Finally, we have the Tegra K1 and its two different versions. The 32-bit version of the Tegra K1 employs a quad-core Cortex-A15 CPU clocked at up to 2.3GHz, and we've seen CPU configurations like this in so many SoCs that by now it's a very well-known quantity. The interesting story here is the 64-bit Tegra K1, which uses a dual-core configuration of Nvidia's brand-new custom CPU architecture, named Denver. If you don't care to know about Denver's architecture, you may want to skip ahead, because there is A LOT to say about Nvidia's custom CPU.

Denver: The Oddest CPU in SoC history

Denver is Nvidia's first attempt at making a proprietary CPU architecture, and for a first attempt it's actually very good. Some of Nvidia's expertise as a GPU maker has translated into its CPU architecture. For instance, exactly like Nvidia's GPU architectures, Denver works with VLIW (Very Long Instruction Word) instructions. Basically, this means that multiple operations are packed together into a single long instruction word, which is only then sent to the execution pipelines.

Denver's most peculiar characteristic might be this one: it's an in-order machine, while basically every other high-end mobile CPU has Out-of-Order Execution (OoOE) capabilities. Denver's lack of a dedicated engine that reorders instructions to reduce memory stalls and thereby increase IPC (Instructions Per Clock) would normally be a huge performance bottleneck. However, Nvidia employs a very interesting (and, in my opinion, unnecessarily complicated) way of dealing with its in-order architecture.

By not building a hardware OoOE engine into the CPU, Nvidia has to rely on software tricks to reorder instructions and enhance ILP (Instruction Level Parallelism). Denver is actually not meant to decode ARM instructions most of the time. Rather, Nvidia chose to build a decoder that runs native instructions, optimized for maximum ILP. For this optimization to occur, Nvidia has implemented a Dynamic Code Optimizer (DCO). The DCO's job is to recognize ARM instruction sequences that are sent to the CPU frequently, translate them into native instructions, and optimize them by reordering operations to reduce memory stalls and maximize ILP. For this to work, a small part of the device's internal storage is reserved to store the optimized instructions.

One implication of this system is that the CPU must be able to decode both native instructions and normal ARM instructions. For this purpose there are two decoders in the CPU block: one huge 7-wide decoder for native instructions generated by the DCO, and a secondary 2-wide decoder for ARM instructions. The difference in size between the two decoders shows how Nvidia expects the native instructions to be used most of the time. Of course, the first time a program is run, when there are no optimized native instructions ready for the native decoder, only the ARM decoder is used, until the DCO starts recognizing recurring ARM instructions from the program and optimizes them; from that point onwards, those specific instructions always go through the native decoder. If a program ran the same instructions multiple times (for example, a benchmark), eventually all of the program's instructions would have corresponding optimized native instructions stored, and only the native decoder would be used. That would correspond to Denver's peak performance scenario.
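As a mental model of the flow just described, here's a highly simplified, hypothetical sketch in Python. All names and the hotness threshold are invented for illustration; the real DCO operates on instruction traces in firmware, not on Python dictionaries:

```python
translation_cache = {}  # optimized native traces, persisted to reserved storage
hotness = {}            # how often each ARM code sequence has been executed
HOT_THRESHOLD = 10      # invented threshold for when a sequence counts as "hot"

def run_arm(seq):    print("2-wide ARM decoder:", seq)     # stand-in
def run_native(seq): print("7-wide native decoder:", seq)  # stand-in
def optimize(seq):   return f"reordered({seq})"            # stand-in for the DCO

def execute(arm_seq: str) -> None:
    if arm_seq in translation_cache:       # already translated: take the fast path
        run_native(translation_cache[arm_seq])
        return
    hotness[arm_seq] = hotness.get(arm_seq, 0) + 1
    if hotness[arm_seq] >= HOT_THRESHOLD:  # hot code: translate and cache it
        translation_cache[arm_seq] = optimize(arm_seq)
    run_arm(arm_seq)                       # until then, fall back to ARM decode

for _ in range(11):
    execute("inner_loop")  # from the 11th call onwards, the native decoder is hit
```

This also illustrates why benchmarks flatter Denver: a loop executed thousands of times quickly ends up running entirely out of the translation cache.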

While Nvidia's architecture might be a very interesting move, I ask myself if it wouldn't just be easier to build a regular Out-of-Order machine. But still, if it performs well in real life, it doesn't really matter how odd Nvidia's approach was. 

Now, moving on to the execution portion of the Denver machine, we see why Denver is the widest mobile CPU in existence. That title was previously held by Cyclone, with its 6 execution pipelines; however, Nvidia went a step further and produced a 7-wide machine, capable of co-issuing up to seven instructions at once. That alone should give the Denver core excellent single-threaded performance.

The 64-bit version of the Tegra K1 employs two Denver cores clocked at up to 2.5GHz. That makes it the SoC with the lowest core count among the ones being compared here. While single-threaded performance will most certainly be great, I'm not sure that the dual-core Denver CPU can outrun its triple-core and quad-core opponents.

To test that, let's start our synthetic benchmark evaluation of the CPUs with Geekbench 3.0, which evaluates both single-threaded and multi-threaded CPU performance.

CPU Benchmarks

In single-threaded applications, Nvidia's custom Denver CPU core takes first place, followed closely by Apple's enhanced Cyclone core in the Apple A8X. Meanwhile, the older Cortex-A15 and Krait cores are far behind, with the 2.2GHz A15 in the 32-bit Tegra K1 pulling slightly ahead of the 2.7GHz Krait 450 in the Snapdragon 805.


In multi-threaded applications, where all of the CPU's cores can be used, the A8X, with its tri-core configuration, blows past the competition. The dual-core Denver version of the Tegra K1 gets about the same performance as the quad-core Cortex-A15 variant, with the quad-core Krait 450 coming in last place, but by a very, very small margin.

Apple's addition of an extra core to the A8X's CPU, together with the fact that Cyclone is a very powerful core, makes it easily the fastest CPU on the market for multi-threaded applications. While Nvidia's 64-bit Denver CPU delivers some impressive performance thanks to its wide core architecture, its core count works against it in the multi-threaded benchmark. It is, in fact, the only dual-core CPU compared here. Even if it's not as fast as the A8X's CPU, Nvidia's Denver is a beast. Were it in a quad-core configuration, it would absolutely blow the competition out of the water.

The GPU

Moving away from CPU benchmarks, we shall now analyze graphics performance, which is probably even more important than CPU performance, given that a high-end tablet is practically required to double as a decent gaming machine. First we'll look at OpenGL ES 3.0 performance with GFXBench 3.0's Manhattan test, then at OpenGL ES 2.0 performance with the T-Rex test, and finally at some of GFXBench 3.0's low-level tests.

The Manhattan test puts the Apple A8X ahead of the competition, followed closely by both Tegra K1 variants, which deliver about the same performance, since they have the exact same GPU and clock speed. Unfortunately, the Adreno 420 in the Snapdragon 805 is no match for the A8X and the Tegra K1, something that points to the need for Qualcomm to up its GPU game.

The T-Rex test paints a similar picture, with the A8X slightly ahead of the Tegra K1, while both of the Tegra K1 variants get about the same score, and the Snapdragon 805 falls behind the other two processors by a pretty big margin.

The Fill rate test stresses mostly the processor's memory interface and the GPU's TMUs (Texture Mapping Units). Since both the Apple A8X and the Snapdragon 805 have the same dual-channel 64-bit LPDDR3 memory interface clocked at 800MHz, the advantage the Snapdragon 805 shows over the A8X can only be attributed to the Adreno 420 having better texturing performance than the PowerVR GXA6850 in the Apple A8X. Meanwhile, the two variants of the Tegra K1 share the same memory interface, also a dual-channel 64-bit LPDDR3 design, only with a lower 533MHz clock speed. The Tegra K1 therefore offers significantly less texturing performance than the A8X and the Snapdragon 805, but is a very worthy performer nevertheless.
The ALU test is more about the GPU's sheer compute power. Since Nvidia's Tegra K1 has 192 CUDA cores in its GPU, it naturally takes the top spot here, and by a pretty significant margin.

For some reason, all tests show the 32-bit Tegra K1 in the Nvidia Shield Tablet scoring a few more points than the 64-bit Tegra K1 in the Google Nexus 9. Given that the two processors have the exact same GPU, this difference is probably due to software tweaks in the Shield Tablet's operating system, which would make sense, given that it is first and foremost a gaming tablet.

Thermal Efficiency and Power Consumption

In the ultra-mobile space, power consumption and thermals are the biggest limiting factors for performance. As the three processors compared here are all performance beasts, several measures had to be taken so that they wouldn't drain a battery too fast or heat up too much.

In order to keep power consumption and die size in check, Apple decided to shrink the manufacturing process from 28nm to 20nm, a first in the ultra-mobile processor market. That alone gives it a huge advantage over the competition, since Apple can fit more transistors in the same die area at the same power consumption. Since the A8X is, in general, the fastest SoC available, the smaller process node is important for keeping the iPad Air 2's battery life good.

Nvidia's Tegra K1 should also do well in terms of power consumption and thermal efficiency in situations where the GPU isn't pushed too hard. The 28nm HPM process it's built on is nothing special, but it's still not outdated for a 2014 processor. And while the Kepler architecture is very power efficient, straining a 192-core GPU to its maximum is still going to produce a lot of heat. The Nexus 9 tablet reportedly gets very warm on the back while running an intensive game.

Finally, the Snapdragon 805 should be the least power-hungry processor, because it is also a smartphone processor. Given that a 5" phone can carry this processor without heating up too much or draining its battery too fast, a tablet should certainly be able to do the same. To put things in perspective, if we put the Tegra K1 or the Apple A8X inside a smartphone, both would be too power hungry and produce too much heat to make for a decent phone. In any case, the Snapdragon 805 is, like the Tegra K1, built on a 28nm HPM process. Given that it's not as much of a performance monster as the other two processors, it must be the least power hungry of the three.

Conclusion

Objectively speaking, the comparisons made here make it pretty clear that once again Apple takes the crown for the best SoC of this generation of high-end tablet processors. Not that the competition is bad. On the contrary: Nvidia went, in just one generation, from being almost irrelevant in the SoC market (let's face it, the Tegra 4 was not an impressive processor) to nipping at the heels of the current king of this market (aka Apple). The Tegra K1 is an excellent SoC, and even if it can't quite match the Apple A8X, it's still quite close in most aspects.

Meanwhile, Qualcomm is seeing its dominance of the tablet market start to fade. Its latest SoC, the Snapdragon 805, available even in some smartphones and phablets, has appeared in only one tablet, while most others carry the Snapdragon 801 or even the 800, which is disappointing, given that a tablet can utilize the processing power more fully than a smartphone or a phablet. Either way, the Snapdragon 805 is still a very good processor. It's just far from being the fastest. Perhaps Qualcomm should consider, like Nvidia and Apple, making a processor with extra oomph meant only for tablets, because while the Snapdragon 805 is an excellent smartphone processor, it's not as competitive in the tablet market.

Wednesday, July 2, 2014

Samsung Releases New Exynos Processors 5422, 5260


Samsung has prepared a series of new Exynos System-on-Chip processors for this year, addressing both the mid-range and high-end segments. Firstly, there's the Exynos 5 Octa 5422, which is nothing more than a higher-clocked version of the 5420 chip introduced last year (and seen in devices like the Galaxy Note 3, Note 10.1, as well as the Galaxy Tab Pro, Note Pro, and Tab S ranges of tablets). Then there's the first processor in the Exynos 5 Hexa series, the 5260, which, as its name suggests, has a total of six CPU cores. Finally, there's the Exynos 5 Octa 5800, which is basically a 5422 adapted for use in Samsung's Chromebook 2 13.3".

The 5422 is architecturally identical to the Exynos 5420 released last year, only with higher clock speeds. It therefore consists of two CPU clusters working in a big.LITTLE configuration: a high-performance cluster of four Cortex-A15 cores clocked at 2.1GHz, and a low-power cluster of four Cortex-A7s clocked at 1.5GHz (vs 1.9GHz for the A15s and 1.3GHz for the A7s in the Exynos 5420). The GPU is a 6-core Mali-T628 MP6 clocked at 695MHz, which gives it up to about 142 GFLOPS of processing power (vs 533MHz and 109 GFLOPS for the 5420). The system is fed by a 32-bit dual-channel LPDDR3-1866 memory interface (14.9GB/s of bandwidth), same as the 5420. The increase in CPU clock speed is hardly enough to yield a noticeable performance increase, but the GPU's higher frequency compared to the Exynos 5420 is definitely a significant jump. Currently the Exynos 5422 is only available in the SM-G900H variant of the Galaxy S5.

Next up is the Exynos 5 Hexa 5260, which is more of a mid-range SoC due to its lower CPU core count and weaker GPU. This processor, much like the Exynos 5 Octa processors, has two CPU clusters in a big.LITTLE configuration. The high-performance cluster this time consists of two Cortex-A15 cores clocked at 1.7GHz, and the low-power cluster is a quad-core Cortex-A7 clocked at 1.3GHz. The GPU is ARM's Mali-T624, probably in a 4-core configuration, but the clock speed is unknown and so is the theoretical compute power. It can be noted, however, that the T624 has half as many execution units per core as the T628, so considering that, plus the fact that there are two fewer cores than in the T628 MP6 powering the Exynos 5420/5422, you end up with about one third of the processing power at the same clock speed for the T624 MP4 compared to the T628 MP6 (see the one-liner below). Finally, the 5260 SoC is fed by a 32-bit dual-channel LPDDR3-1600 memory interface offering 12.8GB/s of peak bandwidth, which is more than enough for this CPU and GPU. So far, the Exynos 5 Hexa has only made its way into two devices: the Galaxy Note 3 Neo and the Galaxy K Zoom.
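That one-third estimate is easy to verify from the relative unit counts just described:

```python
# Relative throughput at equal clocks: cores * execution units per core (T628 = 2x).
t628_mp6 = 6 * 2
t624_mp4 = 4 * 1
print(t624_mp4 / t628_mp6)  # -> 0.333..., i.e. about one third of the T628 MP6
```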

The Exynos 5 Octa 5800 is practically identical to the 5422, except that it has slightly lower clock speeds (2.0GHz vs 2.1GHz for the A15 cluster and 1.3GHz vs 1.5GHz for the A7 cluster) and it's adapted for use in Samsung's Chromebook 2 13.3" (and perhaps other Samsung Chromebooks in the future).

There are also rumors of an upcoming Samsung Exynos chip codenamed 5433, which will allegedly be competitive with Qualcomm's Snapdragon 805 processor. If that turns out to be true, I'd expect a powerful chip like this to debut in one of the most awaited devices of 2014, the Galaxy Note 4. Also, you might have noticed that all of the new processors discussed here have 32-bit CPUs, while the rest of the mobile SoC market is slowly transitioning to 64-bit, so I wouldn't be surprised if the Exynos 5433 ends up with a 64-bit CPU, perhaps a Cortex-A57/53 big.LITTLE combo.

Tuesday, July 1, 2014

Microsoft Surface Pro 3 Review


Microsoft's Surface range of tablets was never successful enough to make a big impression in the tablet market, and this is mostly attributed to the fact that the Surface tablets tried to replace both your tablet and your laptop, and thus ended up replacing neither properly. Now Microsoft is upping its game with the recently announced Surface Pro 3 by making it much closer to a work-oriented PC. In doing so, though, Microsoft sacrificed the portability that benefited the previous Surface tablets' use as entertainment tablets. Whether Microsoft's decision to make the Surface Pro 3 more laptop than tablet was a good one isn't yet clear, but with a higher-resolution screen, better cameras, and a considerably slimmer body, the Surface Pro 3 is definitely a better device overall than its predecessor.

Since it boasts a much larger screen than previous Surface tablets (12" compared to 10.6"), the Surface Pro 3's dimensions are inevitably larger, even though the difference is slightly compensated for by slimmer bezels. Measuring 292.1 x 201.4 x 9.1mm, it's far from an ultra-portable tablet. In fact, it's about the same size as Samsung's Galaxy Note Pro 12.2 (295.6 x 203.9 x 8mm), which has a slightly larger 12.2" screen. The Note Pro 12.2 is noticeably thinner, but then again, it doesn't have an Intel Core CPU, nor does it run full Windows 8.1. Considering the hardware it packs, the Surface Pro 3 is a very thin tablet. It also weighs 800g, which, again, is pretty light for a 12-inch tablet with an Intel Core processor; in comparison, the Galaxy Note Pro 12.2 weighs 750g.

The back of the Surface Pro 3 is made from the same VaporMg magnesium alloy used in all other Surface tablets, but like the Surface 2 it has a light silver finish. This makes the tablet very sturdy, and it also looks very premium. The kickstand that has always been unique to the Surface line is back, much improved. The first-gen Surface's kickstand had a single 22-degree angle, which made it awkward for many use cases; the second generation had two angles, 22 degrees and 45 degrees. Now, the Surface Pro 3's kickstand can open to any angle between the initial 22-degree stage and the 150-degree limit, making it far more versatile than its predecessors.

The Surface Pro 3 also features improved cameras: two 5MP units, one on the front and one on the back. The high-res front-facing camera makes the Pro 3 a very good device for video conferencing. As for the rear-facing camera: given that taking pictures with a 10-inch tablet is already very awkward, imagine what doing it with a 12-inch tablet looks like! In any case, it's there, and it's decent enough for a tablet.

The Surface Pro 3 marks the first upgrade in the display department since the original Surface Pro. The screen is bigger, has a higher resolution and a much more interesting aspect ratio. The 12-inch panel offers much more screen real estate than the previous Surface Pros' 10.6" displays, with the only trade-off being that the tablet becomes a bit too large to be considered truly portable. The resolution has been upgraded from 1920 x 1080 to 2160 x 1440, and the aspect ratio is now 3:2, which makes the screen less wide and taller (in other words, squarer) than the usual 16:9 Windows tablets, including the Surface Pro and Pro 2. As with the other Surface tablets, the Pro 3 uses Microsoft's ClearType display technology, which fuses the display layers into a single layer, the benefit being less screen reflectivity and therefore a better sunlight contrast ratio.
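For context, here's a quick pixel-density check (my own arithmetic, not a figure Microsoft publishes):

# Pixel density of the Surface Pro 3's 12" 2160 x 1440 display.
import math

width_px, height_px, diagonal_in = 2160, 1440, 12.0
ppi = math.hypot(width_px, height_px) / diagonal_in
print(round(ppi))             # ~216 pixels per inch
print(width_px / height_px)   # 1.5, i.e. the 3:2 aspect ratio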

Microsoft has made the right choices with the Pro 3's display. The larger size makes it much more feasible as a productivity machine and a fully-fledged laptop replacement, and the increased resolution is also a great improvement. Like I said before, the bigger screen makes it more suitable as a laptop replacement than as a proper media consumption tablet, but that's probably the right compromise to make.

The Surface Pro 3 may not look like it due to its thin chassis, but it packs some very powerful hardware. Like its predecessors, it's equipped with Intel Core processors, in this case the highly power-efficient 4th-gen parts. You can buy the Pro 3 with a Core i3, a Core i5 or even a Core i7 processor, depending on how much you're willing to pay. The following table shows the different processor/RAM configurations and their respective prices:

 Price    CPU                                                      GPU                                         Max TDP   RAM
 $799     Intel Core i3-4020Y (2C/4T @ 1.5GHz)                     Intel HD 4200 @ 200MHz base/850MHz Turbo    11.5W     4 GB
 $999     Intel Core i5-4300U (2C/4T @ 1.9GHz base/2.9GHz Turbo)   Intel HD 4400 @ 200MHz base/1.1GHz Turbo    15W       4 GB
 $1,299   Intel Core i5-4300U (2C/4T @ 1.9GHz base/2.9GHz Turbo)   Intel HD 4400 @ 200MHz base/1.1GHz Turbo    15W       8 GB
 $1,549   Intel Core i7-4650U (2C/4T @ 1.7GHz base/3.3GHz Turbo)   Intel HD 5000 @ 200MHz base/1.1GHz Turbo    15W       8 GB
 $1,949   Intel Core i7-4650U (2C/4T @ 1.7GHz base/3.3GHz Turbo)   Intel HD 5000 @ 200MHz base/1.1GHz Turbo    15W       8 GB


For the tablet form factor, even the Core i3 is a very capable processor and should be fine for most basic tasks. I'd say the Core i7 model is a bit overkill for a tablet unless you want to use it for gaming or other intense workloads. All Surface Pro 3 variants have an internal fan for cooling, but it's generally quiet and shouldn't be an inconvenience. Logically, though, the higher CPU bins, especially the Core i7, are more likely to make the fan spin faster and louder, and to drain the battery faster, as all Surface Pro 3 models have the same 42Wh battery. Taking heat dissipation, fan noise, battery life and performance into consideration, I consider the Core i5 model the ideal compromise among these variables, since it's fast and at the same time not too power hungry.

By the way, Microsoft claims 9 hours of continuous web browsing for the Surface Pro 3, which is very good for a tablet with this hardware, although I'm not sure how the different CPU bins will vary in terms of battery life.
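For some rough context (again, my own back-of-the-envelope arithmetic, not a Microsoft figure), that claim implies an average power draw of under 5W:

# Average power draw implied by Microsoft's 9-hour browsing claim.
battery_wh = 42       # all Surface Pro 3 models share a 42Wh battery
claimed_hours = 9     # Microsoft's continuous web browsing figure
print(round(battery_wh / claimed_hours, 1))  # ~4.7W average draw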

As with the other Surface Pros, the Pro 3 comes with a digital pen stylus. This time, however, it's not a Wacom digitizer, which means fewer pressure sensitivity levels (256 on the Pro 3 vs 1024 on the previous Surface Pros), but the new N-trig technology allows for some nifty software features. As usual, there's nowhere on the tablet's body to store the Surface Pen. The Pro 3's new 3:2 aspect ratio combines very well with the stylus: in portrait mode, the screen's proportions make it feel rather like a 12" drawing pad.

Pricing and Conclusion

Microsoft is offering the Surface Pro 3 in a variety of processor/RAM/storage options. The entry-level model has a Core i3 CPU, 4 GB of RAM and a 64 GB SSD, and costs $799. Then there's a Core i5 model with 4 GB of RAM and 128 GB of SSD storage, which goes for $999, and another Core i5 model with 8 GB of RAM and a 256 GB SSD, costing $1,299. Then there's a Core i7 model with 8 GB of RAM and a 256 GB SSD for $1,549, and finally the most expensive Core i7 model with 8 GB of RAM and 512 GB of SSD storage, which costs $1,949. The Type Cover keyboard for the Surface Pro 3 is sold separately and costs $129. Microsoft really should bundle this keyboard with the tablet: the tablet itself is already very expensive, and like I said before, the Pro 3 is more of a work-oriented laptop replacement than a media consumption tablet, and in order for it to do what it does best, that is, replace your laptop, it needs the keyboard cover.
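Factoring the keyboard into the price makes that point clearer; here's a trivial sketch of the effective cost of each configuration, using the prices from the paragraph above:

# Effective price of each Surface Pro 3 configuration once the
# $129 keyboard cover is added.
cover = 129
for price in (799, 999, 1299, 1549, 1949):
    print(f"${price} -> ${price + cover} with keyboard")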

The first two Surface Pros tried to be both a laptop and a tablet, but failed at being either. The Surface Pro 3, on the other hand, excels at productivity tasks, making itself a worthy laptop replacement, and is at least usable as a tablet, but it sacrifices the portability that is usually why people buy tablets in the first place. So while it is a compromise, it's the best one Microsoft could've made.

So what is the verdict on the Surface Pro 3? Well, it has the screen real estate, the hardware, the software and a keyboard cover to fully replace your laptop, while at the same time being more portable than any ultrabook out there, probably with more battery life than most of them, and it can still double as a tablet, albeit a very large one. Wrap that up with a versatile kickstand, a high-res screen, a chassis that may just be the thinnest to sport an Intel Core CPU and an improved stylus, and you have just about the best tablet-laptop hybrid so far, and most certainly the best Surface tablet ever released.