# RabbitStream Overview

On Solana, speed is everything for high-frequency trading and sniping bots. The fastest data comes from <mark style="color:yellow;">Shreds</mark>, which are raw data packets emitted at the earliest stage of processing. While Shreds provide the earliest transaction detection (pre-processing), they are <mark style="color:yellow;">missing the final details</mark> (transaction meta) such as logs and confirmed results. Even so, they contain enough information to power speed-critical sniping strategies.

{% hint style="info" %}
For developers needing the absolute lowest latency, [<mark style="color:yellow;">Rabbitstream</mark>](https://docs.shyft.to/solana-shredstreaming/rabbitstream-overview) delivers raw Solana shreds with the precision of Yellowstone-style filters.
{% endhint %}

#### The Problem with Raw Shreds

While raw shreds offer unbeatable speed, working with them presents its own challenges for developers:

* **No Filtering, Data Overload**: Shreds are unfiltered, meaning you receive *every* single transaction on Solana—including internal vote and failed transactions—which creates a huge data load that is hard to handle. <mark style="color:yellow;">Filtering is not available</mark> in any form.
* **Difficult Decoding Requirements**: The data <mark style="color:yellow;">structure</mark> of raw shreds is <mark style="color:yellow;">complex and binary-encoded</mark>, making it incompatible with standard Solana parsing libraries and requiring custom, time-consuming decoding logic.
* **Missing Transaction Context:** Since shreds are unconfirmed, they lack the full transaction metadata (meta), including program logs, inner instructions, and pre/post token balances. This limits them primarily to simple sniping strategies.
* **High Expense:** Few providers offer raw/decoded shred streams, and those that do are often priced out of reach for independent developers and smaller teams.

### Introducing RabbitStream: Shred Speed, gRPC Usability

RabbitStream provides <mark style="color:yellow;">real-time transactions extracted directly from shreds</mark>, coming straight from leaders, and <mark style="color:yellow;">streamed through a gRPC interface</mark>. It combines the delivery speed of shreds with the usability and filtering power of Yellowstone gRPC.

<table><thead><tr><th width="235">Feature</th><th>How Rabbitstream solves it</th></tr></thead><tbody><tr><td><strong>Yellowstone gRPC Filtering</strong></td><td>Use the <strong>exact same</strong> <code>SubscribeRequest</code> format as Yellowstone gRPC. Filter by <code>accountInclude</code>, <code>accountRequired</code>, and more—no complex decoding is required on your end.</td></tr><tr><td><strong>Compatible Transaction Structure</strong></td><td>Transactions are streamed with a structure similar to Yellowstone gRPC (minus the full meta data), making it instantly compatible with existing gRPC clients and processing logic.</td></tr><tr><td><strong>Ultra-Low Latency</strong></td><td>Feel Solana's pulse as it happens. RabbitStream provides the lowest latency for transaction detection possible outside of running your own validator cluster.</td></tr></tbody></table>
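Because RabbitStream accepts the same `SubscribeRequest` shape as Yellowstone gRPC, an existing transactions filter can be reused as-is. The sketch below builds such a request as a plain object; the interface mirrors Yellowstone's `SubscribeRequestFilterTransactions` fields, and the program ID is only an illustrative example, not something RabbitStream requires:

```typescript
// Sketch of a Yellowstone-style transactions filter, reusable with RabbitStream.
// Field names follow Yellowstone gRPC's SubscribeRequest; the program ID below
// is an illustrative placeholder—substitute the program you want to watch.

interface SubscribeRequestFilterTransactions {
  accountInclude: string[];  // match txs that mention ANY of these accounts
  accountExclude: string[];  // drop txs that mention any of these accounts
  accountRequired: string[]; // match only txs that mention ALL of these accounts
}

interface SubscribeRequest {
  transactions: Record<string, SubscribeRequestFilterTransactions>;
}

// Example: watch every transaction that touches one program.
const WATCHED_PROGRAM = "ExampleProgram1111111111111111111111111111111"; // placeholder

const request: SubscribeRequest = {
  transactions: {
    myTokenDetector: {
      accountInclude: [WATCHED_PROGRAM],
      accountExclude: [],
      accountRequired: [],
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```

The named key (`myTokenDetector` here) is the filter label, matching how Yellowstone clients tag multiple filters inside one subscription.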

### Allowed Filters in RabbitStream: The Trade-Off

RabbitStream is designed for <mark style="color:yellow;">maximum speed</mark>, which means it delivers transaction data from the very start of the Solana pipeline (shredding).

Since RabbitStream <mark style="color:yellow;">captures the data before execution</mark> happens, certain information that is generated *later* in the pipeline is not available:

* <mark style="color:yellow;">**No Execution Metadata:**</mark> RabbitStream does not have fields that are created during the final processing stage (like logs, inner instructions, or precise error details).
* <mark style="color:yellow;">**Limited Filters:**</mark> RabbitStream streams data from the pre-processed stage, and <mark style="color:yellow;">only transaction filters are allowed</mark>. It does **not allow** filtering by *accounts*, *blocks*, or *slot number* *directly* on the stream, since this information is typically confirmed and compiled later by the RPC node.

<table><thead><tr><th>Filter type</th><th align="center">Yellowstone gRPC</th><th align="center" valign="top">RabbitStream</th></tr></thead><tbody><tr><td>transactions</td><td align="center">✅</td><td align="center" valign="top">✅</td></tr><tr><td>accounts</td><td align="center">✅</td><td align="center" valign="top">❌</td></tr><tr><td>slots</td><td align="center">✅</td><td align="center" valign="top">❌</td></tr><tr><td>blocks</td><td align="center">✅</td><td align="center" valign="top">❌</td></tr><tr><td>blocksMeta</td><td align="center">✅</td><td align="center" valign="top">❌</td></tr><tr><td>accountsDataSlice</td><td align="center">✅</td><td align="center" valign="top">❌</td></tr></tbody></table>
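Given the table above, one way to reuse an existing Yellowstone request with RabbitStream is to strip the unsupported filter types before subscribing. A minimal sketch (the helper name and the request shape are illustrative assumptions, not part of the RabbitStream API):

```typescript
// Keep only the filter types RabbitStream honors (currently just "transactions");
// unsupported keys like slots, accounts, or blocks are dropped before sending.
const SUPPORTED_FILTERS = new Set(["transactions"]);

function toRabbitStreamRequest(
  request: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(request)) {
    if (SUPPORTED_FILTERS.has(key)) out[key] = value;
  }
  return out;
}

// Example: a Yellowstone request that also subscribes to slots and accounts.
const yellowstoneRequest = {
  transactions: { myFilter: { accountInclude: ["SomeProgram111"] } },
  slots: { slotFilter: {} },    // not supported by RabbitStream
  accounts: { acctFilter: {} }, // not supported by RabbitStream
};

console.log(Object.keys(toRabbitStreamRequest(yellowstoneRequest))); // → [ 'transactions' ]
```

This keeps one filter definition shared between both streams: the full request goes to Yellowstone gRPC, the stripped one to RabbitStream.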

#### Speed Advantage

We have benchmarked RabbitStream's performance against standard Yellowstone gRPC by building a simple Pump.fun Token Launch Detector. Our initial tests reveal a consistent <mark style="color:yellow;">speed advantage</mark> ranging from <mark style="color:yellow;">\~15ms to 100ms</mark>.

[Rabbitstream Token Detector Examples \[.ts\]](https://github.com/Shyft-to/yellowstone-grpc-vs-rabbitstream)

We feel this will be a huge unlock for a lot of devs and await **feedback** from the community.
