Deeper Is Not Necessarily Better: Princeton U & Intel’s 12-Layer Parallel Networks Achieve Performance Competitive With SOTA Deep Networks | Synced



Source: Synced | AI Technology & Industry Review

In the new paper Non-deep Networks, a research team from Princeton University and Intel Labs argues it is possible to achieve high performance with “non-deep” neural networks, presenting ParNet (Parallel Networks), a novel 12-layer architecture that achieves performance competitive with its state-of-the-art deep counterparts.
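The core idea of trading depth for parallel width can be illustrated with a toy sketch. The NumPy snippet below is a hypothetical, simplified illustration only, not the authors' ParNet implementation: it runs a few shallow sub-networks side by side on the same input and fuses their outputs by summation, so total sequential depth stays small while capacity comes from the parallel streams.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def branch(x, weights):
    # One shallow sub-network: a handful of layers applied sequentially.
    for w in weights:
        x = relu(x @ w)
    return x

dim = 16
depth_per_branch = 3   # sequential depth stays small
num_branches = 3       # capacity comes from parallel streams instead

# Hypothetical random weights for each parallel branch (illustration only).
branches = [
    [rng.standard_normal((dim, dim)) * 0.1 for _ in range(depth_per_branch)]
    for _ in range(num_branches)
]

x = rng.standard_normal((1, dim))

# Branches are independent, so they could run concurrently;
# their outputs are fused here by simple summation.
fused = sum(branch(x, ws) for ws in branches)
print(fused.shape)  # (1, 16)
```

The point of the sketch is that the longest sequential path is only `depth_per_branch` layers, regardless of how many branches run in parallel; the actual ParNet design and its fusion scheme are described in the paper.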