Device Profiles / Group Folders for Cloud Captures

Hi everyone, here’s something I’ve been meaning to ask for since I got my Quad Mini:

Currently, the Cortex Cloud upload process requires users to manually upload a photo, select tags, and write a description for every single capture. When capturing a single amplifier at various gain stages or with different cabinets, this creates a massive amount of repetitive manual labor.

Because of this friction:

  • Creators are less likely to share large, high-quality packs.

  • Browsers see a cluttered “Newest” feed filled with 20 identical thumbnails of the same amp (or no thumbnail at all).

I suggest implementing a system similar to other industry leaders (like TONEX/Tone3000) where captures are organized under a “Parent” Device Profile.

  • Unified Metadata: Create a “Device” entry once (e.g., 1968 Marshall Plexi). Upload the photo, brand, and general description one time.

  • Child Captures: “Hang” individual captures under that profile. Each child capture would only need specific metadata (e.g., Gain 2, Gain 4, Bright Channel).

  • Cloud Display: On the Discovery feed, these would appear as a single “Pack” or “Folder” rather than 50 individual entries, making the UI much cleaner.
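The parent/child structure above can be sketched as a small data model. To be clear, none of these class names or fields exist in the Cortex Cloud API; this is only a hypothetical illustration of how shared metadata could live once on the parent while each capture carries only its own settings:

```python
from dataclasses import dataclass, field

@dataclass
class Capture:
    """A child capture: only capture-specific metadata."""
    name: str       # e.g. "Gain 2", "Bright Channel"
    settings: dict  # knob positions, channel, cab, etc.

@dataclass
class DeviceProfile:
    """The parent 'Device' entry: photo, brand, and description entered once."""
    brand: str
    model: str
    description: str
    photo_url: str
    captures: list[Capture] = field(default_factory=list)

    def add_capture(self, name: str, settings: dict) -> Capture:
        # "Hang" a new capture under this device profile.
        cap = Capture(name=name, settings=settings)
        self.captures.append(cap)
        return cap

# Example: one device entry, three child captures.
plexi = DeviceProfile(
    brand="Marshall",
    model="1968 Plexi",
    description="Vintage 100W head, captured through a 4x12",
    photo_url="https://example.com/plexi.jpg",  # placeholder URL
)
plexi.add_capture("Gain 2", {"gain": 2})
plexi.add_capture("Gain 4", {"gain": 4})
plexi.add_capture("Bright Channel", {"channel": "bright"})
```

The Discovery feed would then render one `DeviceProfile` card per device instead of one entry per capture.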

This change would improve the chance that creators share their captures (I, for one, have several quality captures I use daily that I never bothered to share) and help users find new sounds (the mandatory picture of the gear being captured on Tone3000 is also a great idea). It would transform the Cortex Cloud from a list of individual files into a structured library of real gear profiles and “backups”.

Curious if anyone else finds this issue an important one to address.

You got my Vote.

A lot of things could be enhanced on the Neural Cloud (better organization, a voting system, grouping, etc.)

One thing that would be useful is a way to ‘preview’ a capture without having to download it, similar to what https://www.tone3000.com/ did: select a pre-recorded DI and listen to the result. Simple and effective; you can even play in ‘real time’, even if the latency would be very high.

Doing it for captures could be enough, because for a preset it would mean the whole QC algorithm would need to be reimplemented on a ‘normal computer’ (i.e., a server) in order to render the sample… (it would be cool to have a ‘Cortex Native’ at that point, but I highly doubt they would ever go this route…)
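The preview idea boils down to: process a pre-recorded DI clip through the capture once on the server, then stream the rendered audio to the browser. A minimal sketch, assuming such a server-side renderer existed; the `soft_clip` waveshaper below is only a stand-in for the capture model, since Neural DSP’s actual capture algorithm is not public:

```python
import math

def soft_clip(sample: float, drive: float) -> float:
    """Stand-in nonlinearity playing the role of the capture model."""
    return math.tanh(drive * sample)

def render_preview(di_samples: list[float], drive: float = 3.0) -> list[float]:
    """Render a DI clip through the (stand-in) capture once, server-side.

    The result could be cached and streamed, so listeners never need
    to download the capture itself.
    """
    return [soft_clip(s, drive) for s in di_samples]

# Example: a short DI fragment rendered for preview playback.
preview = render_preview([0.0, 0.1, 0.5, -0.5])
```

Pre-rendering like this sidesteps the latency problem of real-time playback entirely, at the cost of only hearing the fixed DI clip.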

I didn’t dare suggest the capture preview feature, but you’re right that it’s really cool how it’s implemented on Tone3000; I use it all the time. With the Quad plugged in, there could even be a way to audition it on the actual amp/pedal block in the chain.

That said, just adopting a more curated approach to how captures are organized and shared between users would be a huge improvement.

Thank you for your vote, let’s hope others join in.