NVIDIA Discloses Next-Generation Tegra SoC; Parker Inbound?

While NVIDIA has been rather quiet about the SoC portion of the DRIVE PX 2, it’s unmistakable that a new iteration of the Tegra SoC is present.

The GPUs and SoCs of the DRIVE PX 2 are fabricated on TSMC’s 16nm FinFET process, which is something we haven’t yet seen from NVIDIA. The other obvious difference is the CPU configuration. While Tegra X1 had four Cortex A57s and four Cortex A53s, this new SoC (Tegra P1?) pairs four Cortex A57s with two Denver CPUs. As of now it isn’t clear whether this is the same iteration of the Denver architecture that we saw in the Tegra K1. However, regardless of which revision it is, we’re still looking at the same basic design: a wide, in-order VLIW core that relies on dynamic code optimization to translate ARM instructions into the core’s native VLIW ISA, here sitting alongside conventional out-of-order ARM cores.
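
To make the dynamic code optimization concept a bit more concrete, here is a minimal toy sketch of the general hot-spot translation idea, with invented names and thresholds; it illustrates the technique in the abstract, not NVIDIA’s actual DCO, which runs as firmware on the core itself rather than as software like this.

```python
# Toy sketch of the dynamic-code-optimization idea: execute code in a
# slow path at first, count executions, and once a region is "hot", pay
# a one-time cost to produce an optimized translation that is cached
# and reused. Names and thresholds here are invented for illustration.

HOT_THRESHOLD = 100  # hypothetical execution count before translating

translation_cache = {}  # region -> optimized (translated) routine
hit_counts = {}         # region -> number of slow-path executions

def run_region(region, slow_execute, translate):
    """Execute one code region, translating it once it becomes hot."""
    if region in translation_cache:
        return translation_cache[region]()  # fast path: cached translation
    hit_counts[region] = hit_counts.get(region, 0) + 1
    if hit_counts[region] >= HOT_THRESHOLD:
        translation_cache[region] = translate(region)  # expensive, done once
        return translation_cache[region]()
    return slow_execute(region)             # slow path: unoptimized execution
```

The sketch also shows why branchy code with little reuse is a worst case for this approach: the one-time translation cost gets paid without ever being amortized, a point we return to below.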

Based on the description of the SoC, while NVIDIA is not formally announcing this new SoC or giving it a name at this time, the feature set lines up fairly well with the original plans for the SoC known as Parker. Before it was bumped to make room for Tegra X1, it had been revealed that Parker would be NVIDIA’s first 16nm FinFET SoC, and would contain Denver CPU cores, just like this new SoC.


NVIDIA’s Original 2013 Tegra Roadmap, The Last Sighting of Parker

Of course Parker was also said to include a Maxwell GPU, whereas NVIDIA has confirmed that this new Tegra is Pascal based. Though with Parker’s apparent delay, an upgrade to Pascal makes some sense here. Otherwise we have limited information on the GPU at present besides its Pascal heritage; NVIDIA is not disclosing anything about the number of CUDA cores or other features.

NVIDIA Tegra Specification Comparison

                        Tegra X1                                2016 “Parker”
CPU Cores               4x ARM Cortex A57 + 4x ARM Cortex A53   2x NVIDIA Denver + 4x ARM Cortex A57
CUDA Cores              256                                     ?
Memory Clock            1600MHz (LPDDR4)                        ?
Memory Bus Width        64-bit                                  ?
FP16 Peak               1024 GFLOPS                             ?
FP32 Peak               512 GFLOPS                              ?
GPU Architecture        Maxwell                                 Pascal
Manufacturing Process   TSMC 20nm SoC                           TSMC 16nm FinFET
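
As a sanity check on the X1 column, the peak numbers follow directly from the core count and the X1’s roughly 1000MHz GPU clock (the clock figure is our assumption here), with FP16 doubled by Maxwell’s two-wide FP16 operations:

```python
# Back-of-the-envelope check on the Tegra X1 peak figures in the table.
cuda_cores = 256
gpu_clock_ghz = 1.0            # assumed: X1's ~1000MHz Maxwell GPU clock
flops_per_core_per_clock = 2   # a fused multiply-add counts as 2 FLOPs

fp32_gflops = cuda_cores * flops_per_core_per_clock * gpu_clock_ghz
fp16_gflops = fp32_gflops * 2  # X1 packs two FP16 ops per FP32 lane

print(fp32_gflops, fp16_gflops)  # 512.0 1024.0
```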

But for now the bigger story is the new Tegra’s CPU configuration. Needless to say, this is at least somewhat of an oddball architecture. As Denver is a custom CPU core, we’re looking at a custom NVIDIA interconnect to make the Cortex A57 and Denver cores work together. The question, then, is why NVIDIA would want to pair Denver CPU cores with Cortex A57 cores that are themselves relatively high-performance.

At least part of the answer is going to depend on whether NVIDIA’s software stack uses the two clusters in a cluster migration scheme or in some kind of heterogeneous multi-processing (HMP) scheme. Comments made by NVIDIA during their press conference indicate that they believe the Denver cores on the new Tegra will offer better single-threaded performance than the A57s. Without knowing more about the version of Denver in the new Tegra this is somewhat surprising, as it’s well documented that Denver has had issues dealing with code that doesn’t resemble a non-branching loop, and more troubling still, code generation for Denver can take up a significant amount of time. As we saw with the Denver TK1, Cortex A57s can actually be faster clock-for-clock if the code is particularly unfavorable to Denver.
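
For readers unfamiliar with the two approaches, the difference boils down to what the OS scheduler can see at any given time. A minimal sketch, with invented names, and neither being NVIDIA’s actual implementation:

```python
# Sketch of the two scheduling models discussed above (invented names).

A57_CLUSTER = ["a57_0", "a57_1", "a57_2", "a57_3"]
DENVER_CLUSTER = ["denver_0", "denver_1"]

def visible_cores_cluster_migration(demand_is_high):
    """Cluster migration: only one cluster is online at a time, and the
    whole workload migrates between clusters as demand changes."""
    return DENVER_CLUSTER if demand_is_high else A57_CLUSTER

def visible_cores_hmp():
    """HMP: all cores are online simultaneously, and the scheduler
    places each thread on whichever core suits it best."""
    return A57_CLUSTER + DENVER_CLUSTER
```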

Consequently, if NVIDIA is using a traditional cluster migration or HMP scheme where Denver is treated as the consistently faster core in all scenarios, I would be at least slightly concerned if NVIDIA decided to ship this configuration with the same iteration of Denver as in the Tegra K1. Equally likely, though, NVIDIA has had over a year to refine Denver and may be rolling out an updated (and presumably faster) version for the new Tegra. Otherwise it also wouldn’t surprise me if the vast majority of CPU work for PX 2 is run on the A57 cluster while the Denver cluster is treated as a co-processor of sorts, in which case only specific workloads would even access the Denver CPUs.

NVIDIA Announces DRIVE PX 2 – Pascal Power For Self-Driving Cars

As has become tradition at CES, the first major press conference of the show belongs to NVIDIA. In previous years their press conference would be dedicated to consumer mobile parts – the Tegra division, in other words – while more recently the company’s conference has shifted to a mix of mobile and automobiles. Finally for 2016, NVIDIA has made a full transition over to cars, with this year’s press conference focusing solely on the subject and skipping consumer mobile entirely.

At CES 2015 NVIDIA announced the DRIVE CX and DRIVE PX systems, with DRIVE CX focusing on cockpit visualization while DRIVE PX marked a much more ambitious entry into the self-driving vehicle market for NVIDIA. Both systems were based around NVIDIA’s then-new Tegra X1 SoC, tapping it for its graphics capabilities and its compute capabilities, respectively.

For 2016 however, NVIDIA has doubled down on self-driving vehicles, dedicating the entire press conference to the concept and filling it with suitable product announcements. The headline announcement for this year’s conference, then, is the successor to NVIDIA’s DRIVE PX system, the aptly named DRIVE PX 2.

From a hardware perspective the DRIVE PX 2 is essentially picking up from where the original DRIVE PX left off. NVIDIA continues to believe that the key to self-driving cars is computer vision realized through neural networks, with more compute power necessary to get better performance and greater accuracy. To that end, while DRIVE PX was something of an early system to prove the concept, with DRIVE PX 2 NVIDIA is thinking much bigger.

NVIDIA DRIVE PX Specification Comparison

                   DRIVE PX                                DRIVE PX 2
SoCs               2x Tegra X1                             2x Tegra “Parker”
Discrete GPUs      N/A                                     2x Unknown Pascal
CPU Cores          8x ARM Cortex-A57 + 8x ARM Cortex-A53   4x NVIDIA Denver + 8x ARM Cortex-A57
GPU Cores          2x Tegra X1 (Maxwell)                   2x Tegra “Parker” (Pascal) + 2x Unknown Pascal
FP32 Performance   >1 TFLOPS                               8 TFLOPS
FP16 Performance   >2 TFLOPS                               16 TFLOPS?
TDP                N/A                                     250W

As a result the DRIVE PX 2 is a very powerful – and very power hungry – design meant to offer much greater compute performance than the original DRIVE PX. Based around NVIDIA’s newly disclosed 2016 Tegra (likely to be Parker), the PX 2 incorporates a pair of the SoCs. However in a significant departure from the original PX, the PX 2 also integrates a pair of Pascal discrete GPUs on MXM cards, in order to significantly boost the GPU compute capabilities over what a pair of Tegra processors alone could offer. The end result is that PX 2 packs a total of 4 processors on a single board, essentially combining the two Tegras’ 8 ARM Cortex-A57 and 4 NVIDIA Denver CPU cores with 4 Pascal GPUs.

NVIDIA is not disclosing anything about the discrete Pascal GPUs at this time beyond the architecture and that, like the new Tegra, they’re built on TSMC’s 16nm FinFET process. However looking at the board held up by NVIDIA CEO Jen-Hsun Huang, it appears to be a sizable card with 8 GDDR5 memory packages on the front. My gut instinct is that this may be the Pascal successor to GM206 with the 8 chips forming a 128-bit memory bus in clamshell mode, but at this point that’s speculation on my part.
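
For reference, the arithmetic behind that guess: in clamshell mode two GDDR5 packages share a single 32-bit channel, with each chip driving 16 bits, so 8 packages would indeed line up with a 128-bit bus. Again, this is a sketch of the speculation above, not a confirmed configuration:

```python
# Arithmetic behind the 128-bit clamshell speculation (not confirmed).
packages = 8
bits_per_package = 16  # clamshell mode: two chips split one 32-bit channel
print(packages * bits_per_package)  # 128-bit aggregate memory bus
```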

Update: Kudos to our readers on this one. The MXM modules in the picture are almost component-for-component identical to the GTX 980 MXM photo we have on file. So it is likely that these are not Pascal GPUs, and that they’re merely placeholders.

What isn’t in doubt though are the power requirements for PX 2. PX 2 will consume 250W of power – equivalent to today’s GTX 980 Ti and GTX Titan X cards – and will require liquid cooling. NVIDIA’s justification for the design, besides the fact that this much computing power is necessary, is that a liquid cooling system ensures that the PX 2 will receive sufficient cooling in all environmental conditions. More practically though, the company is targeting electric vehicles with this, many of which already use liquid cooling, and as a result are a more natural fit for PX 2’s needs.  For all other vehicles the company will also be offering a radiator module to use with the PX 2.

Otherwise NVIDIA never did disclose the power requirements for the original PX, but it’s safe to say that the PX 2’s are significantly higher. It’s particularly telling that in the official photos of the board with the liquid cooling loops installed, it’s the dGPUs we clearly see attached to the loops. Consequently I wouldn’t be surprised if the bulk of that 250W power consumption comes from the dGPUs rather than the Tegra SoCs.

As far as performance goes, NVIDIA spent much of the evening comparing the PX 2 to the GeForce GTX Titan X, and for good reason. The PX 2 is rated for 8 TFLOPS of FP32 performance, which puts it a full 1 TFLOPS ahead of the 7 TFLOPS Titan X. While those are the raw specifications, it’s important to note that the Titan X is a single GPU whereas the PX 2 spreads its performance across four, which means the PX 2 will need to work around multi-GPU scaling issues that simply don’t exist for the Titan X.

Curiously, NVIDIA also used the event to introduce a new unit of measurement – the Deep Learning Tera-Op, or DL TOPS – and at 24 DL TOPS, the figure is an unusual 3x PX 2’s FP32 performance. Based on everything disclosed by NVIDIA about Pascal so far, we don’t have any reason to believe FP16 performance is more than 2x Pascal’s FP32 performance, so where the extra performance comes from is a mystery at the moment. NVIDIA quoted this figure and not FP16 FLOPS, so it may include a special-case operation (à la the fused multiply-add), or may even include the performance of the Denver CPU cores.
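
To put numbers on the discrepancy, using only the figures quoted above:

```python
# The gap that NVIDIA's DL TOPS figure leaves unexplained.
fp32_tflops = 8.0
fp16_tflops = fp32_tflops * 2  # 2:1 FP16 rate, the most we expect from Pascal
dl_tops = 24.0

print(dl_tops / fp32_tflops)   # 3.0  (the unusual 3x ratio)
print(dl_tops - fp16_tflops)   # 8.0  tera-ops unaccounted for by FP16 alone
```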

On that note, while DRIVE PX 2 was the focus of NVIDIA’s presentation, it was GTX Titan X that was actually driving all of the real-time presentations. As far as I know we did not actually see any demos being powered by PX 2, and it’s unclear whether PX 2 is even ready for controlled demonstration at this time. NVIDIA mentions in their press release that the PX 2 will be available to early access partners in Q2, with general availability not occurring until Q4.

Meanwhile along with the PX 2 hardware, NVIDIA also used their conference to reiterate their plans for self-driving cars, and where their hardware and software will fit into this. NVIDIA is still aiming to develop a hardware ecosystem for the automotive industry rather than an end-to-end solution. Which is to say that they want to provide the hardware, while letting their customers develop the software.

However at the same time, in a tacit admission that it’s not always easy for customers to get started from scratch, NVIDIA will also be developing their own complete reference platform combining hardware and software. The reference platform includes not just the hardware for self-driving cars – including the PX 2 system and other NVIDIA hardware to train the neural nets – but also software components, including the company’s existing DriveWorks SDK and a pre-trained driving neural net the company is calling DRIVENet.

Consequently, while the company isn’t strictly in the process of developing its own cars, it is essentially in the process of training them. Which means NVIDIA has been sending cars around the Sunnyvale area to record interactions, training the 37-million-neuron network to understand traffic. A significant portion of NVIDIA’s presentation was taken up by demonstrating DRIVENet in action, showcasing how well it understood the world using a combination of LIDAR and computer vision, with a GTX Titan X running the network at about 50fps. Ultimately I think it’s fair to say that NVIDIA would rather their customers be doing this, building nets on top of systems like DIGITS, but they have also seen first-hand in previous endeavors that bootstrapping the kind of ecosystem they desire requires having all of the components already there.

Finally, NVIDIA also announced that they have lined up their first customer for PX 2: Volvo. In 2017 the company will be outfitting 100 XC90 SUVs with the PX 2, for use in their ongoing self-driving car development efforts.

MAINGEAR Rolls-Out 34” All-in-One PC with 18-Core Xeon, GeForce GTX Titan X

The concept of the all-in-one desktop personal computer was created to save space and simplify the design of PCs. While there have been a number of traditional AIO desktops available over the years, leading PC makers only began to address performance-demanding market segments with specially-designed models several years ago. At the Consumer Electronics Show on Monday, boutique PC maker Maingear introduced the world’s first AIO desktop featuring top-of-the-range gaming or even server components.

The Maingear Alpha 34 is a giant all-in-one desktop built around a 34” curved display with a 3440×1440 resolution. Unlike the vast majority of semi-custom AIO PCs, the Alpha 34 is built around standard mini-ITX motherboards, in this case the ASUS ROG Maximus VIII Impact or, for high-end configurations, the ASRock X99E-ITX (an Intel H110-based motherboard is available as an option for lower-cost builds). Thanks to this flexibility in motherboard selection, the system can use either socket 1151 or socket 2011-3 CPUs depending on the board, including Intel’s Core i3/i5/i7 or Intel Xeon E5 v3 processors with up to 18 cores and up to 45MB of cache. The AIO desktop uses Maingear’s own closed-loop liquid cooler to ensure the stability of desktop and server CPUs.

The Alpha 34 can be equipped with up to 32GB of unbuffered DDR4 memory, one M.2 NVMe solid-state drive and up to two 2.5” storage devices. The AIO can also accommodate full-sized desktop graphics cards, including the AMD Radeon R9 Nano, the NVIDIA GeForce GTX Titan X, or professional cards. The system naturally supports all the connectivity options provided by the aforementioned motherboards, including Gigabit Ethernet, 802.11 a/b/g/n/ac Wi-Fi, 5.1-channel audio, USB 3.0, USB 3.1 connectors and so on.

As is usually the case for boutique system builders, Maingear is offering a suite of customization options to let the AIO hit different price ranges and performance levels. That said, the Alpha 34 is always equipped with a 450W power supply unit, and therefore not all setups will be feasible. Multi-core Intel Xeon processors as well as top-of-the-range graphics cards consume a lot of power, and 450W may not be enough to feed all of the possible configurations.
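
As a rough illustration of how tight that budget could get, here is a back-of-the-envelope tally using published TDPs (which understate transient draw); the platform overhead figure is our assumption:

```python
# Rough power budget for a maxed-out Alpha 34 (illustrative only).
psu_watts = 450
xeon_18c_tdp = 145       # Intel Xeon E5-2699 v3, 18 cores
titan_x_tdp = 250        # NVIDIA GeForce GTX Titan X
platform_overhead = 75   # assumed: motherboard, memory, storage, display, fans

total = xeon_18c_tdp + titan_x_tdp + platform_overhead
print(total, psu_watts - total)  # 470 -20  (already over budget on paper)
```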

Performance of the Alpha 34 featuring the latest Core i7 processors should be on par with that of high-end tower desktops. Upgradeability of all-in-one systems is not as flexible as that of tower machines, which is one of the reasons why AIOs are not for everyone. To make the Intel Z170-based systems a little more future-proof, the PC maker offers factory overclocking for Skylake-S CPUs inside the Alpha 34.

All Maingear systems — including the Alpha 34 — can be custom painted and equipped with various peripherals like external optical drives, keyboards, mice, headsets and so on.

Pricing of the Alpha 34 starts at $1,999. A fully-fledged gaming setup with premium components, but without a custom finish or peripherals, will cost $6,150.99. A workstation machine inside the Alpha 34 chassis will be priced at around $15,000. Finally, Maingear will begin shipping Alpha 34 systems on February 1, 2016.