If you have built in a compact case, you know the constraints. One PCIe slot, maybe two M.2 slots, and every millimetre of space accounted for. That single PCIe slot is going to your GPU, which leaves you with limited options for expansion.
For most people, onboard gigabit is enough. But when your workflow involves moving 8K footage, large render files, or ML datasets, gigabit becomes a bottleneck. You need 10GbE, and you need it without sacrificing your graphics card.
The maths works out in your favour here. M.2 slots run on PCIe lanes. Each PCIe 3.0 lane carries 8 GT/s, and after 128b/130b encoding that leaves roughly 7.9 Gbps of usable bandwidth per lane, so a 2280 slot wired for PCIe 3.0 x2 gives you about 15.75 Gbps, comfortable headroom for 10 Gigabit Ethernet. The limitation was never the interface, just the availability of modules that actually use it for networking.
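A quick back-of-envelope check of that figure (the constants are the published PCIe 3.0 numbers, not anything specific to this module):

```python
# Usable bandwidth of a PCIe 3.0 x2 link versus the 10GbE line rate.
GT_PER_LANE = 8.0      # PCIe 3.0 raw signalling rate, GT/s per lane
ENCODING = 128 / 130   # 128b/130b line-encoding efficiency
LANES = 2

usable_gbps = GT_PER_LANE * ENCODING * LANES
print(f"Usable link bandwidth: {usable_gbps:.2f} Gbps")   # ~15.75 Gbps
print(f"Headroom over 10GbE:   {usable_gbps - 10:.2f} Gbps")
```

Even before protocol overheads on either side, the link has more than 50% headroom over what the NIC can push.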
The Innodisk EGPL-T102 is one of the few modules on the market that does this. It is an M.2 2280 card with a Marvell AQtion controller, connected via a shielded high-frequency cable to a compact RJ45 daughterboard. The daughterboard is small, about the size of a USB port, so mounting options are flexible.
In our FormD T1 builds, we designed a custom PETG bracket to mount the RJ45 board internally, routing it to one of the rear vent areas. The bracket is 3D printed in-house so we can adjust the design depending on the specific build configuration and what other components need clearance. It is a tight fit but it works, and it keeps the external aesthetic clean.
Driver support covers Windows, Linux, and VMware. The controller supports jumbo frames, and the module is rated for -20°C to 60°C operation, so thermal throttling has not been an issue.
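Jumbo frames matter more at 10GbE than at gigabit because per-frame overhead is paid far more often. A rough sketch of the payload efficiency at the two common MTUs, using standard Ethernet framing figures (preamble, header, FCS, inter-frame gap, plus IPv4/TCP headers; real gains also depend on interrupt and per-packet CPU costs, which this ignores):

```python
# Approximate fraction of wire bytes carrying application data,
# for standard (1500) vs jumbo (9000) MTU over TCP/IPv4.
WIRE_OVERHEAD = 8 + 14 + 4 + 12  # preamble+SFD, Ethernet header, FCS, inter-frame gap
IP_TCP = 20 + 20                 # IPv4 + TCP headers (no options)

def efficiency(mtu: int) -> float:
    """Application payload as a fraction of total bytes on the wire."""
    payload = mtu - IP_TCP
    return payload / (mtu + WIRE_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} goodput")
```

That works out to roughly 94.9% goodput at MTU 1500 versus about 99.1% at MTU 9000, on top of the reduction in per-packet processing.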
Innodisk put together a case study covering how we have been using these. If you are looking at 10GbE options for an SFF build, it is worth considering.