Subscriptions to new blocks and logs (Websocket, Kafka, SNS, etc)
For new blocks and logs, properly subscribe across upstreams and make sure that if one is down, others are used transparently. Separate the transport layer so that there can be multiple destinations, including WebSocket, Kafka, AWS SNS, webhooks, etc.

Kasra Khosravi Over 1 year ago
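The failover behavior this entry asks for, subscribing via one upstream and transparently switching to the next when it drops, could be sketched roughly as below. The class and the upstream interface here are hypothetical illustrations, not eRPC's actual API; the `sink` callback stands in for the separate transport layer (WebSocket, Kafka, SNS, etc.).

```python
class FailoverSubscription:
    """Stream new heads from the first healthy upstream; if it drops,
    transparently resubscribe on the next one. The sink callback is the
    pluggable destination (WebSocket client, Kafka producer, SNS, ...)."""

    def __init__(self, upstreams):
        self.upstreams = upstreams

    def run(self, sink):
        for upstream in self.upstreams:
            try:
                for block in upstream.subscribe_new_heads():
                    sink(block)
                return  # stream ended normally
            except ConnectionError:
                continue  # upstream down: fail over to the next one
        raise RuntimeError("all upstreams failed")
```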
Support for Generic JSON-RPC Protocols
Adding support for generic JSON-RPC protocols in eRPC, with essential features such as load balancing, fault tolerance, rate limiting, singleflight (to solve thundering herds), and monitoring. This would enable eRPC to be used for various purposes, domains, and use cases, such as:
• Other blockchain protocols like Bitcoin, Solana, and TON (making it possible to use eRPC for any blockchain by default)
• Non-blockchain JSON-RPC applications
• Custom JSON-RPC implementations
This improvement would make eRPC a more universal and flexible solution for handling JSON-RPC traffic across a wide range of projects.

Thanee Charattrakool About 1 year ago
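The singleflight feature mentioned above deduplicates concurrent identical requests so only one hits the upstream. A minimal sketch of the idea (this is an illustration of the pattern, not eRPC's implementation):

```python
import threading

class SingleFlight:
    """Deduplicate concurrent calls for the same key: only the first
    caller executes the function; the rest wait and share its result."""

    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}  # key -> (done event, shared result holder)

    def do(self, key, fn):
        with self._lock:
            entry = self._inflight.get(key)
            if entry is None:
                event, holder = threading.Event(), {}
                self._inflight[key] = (event, holder)
                leader = True
            else:
                event, holder = entry
                leader = False
        if leader:
            try:
                holder["result"] = fn()
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()
        else:
            event.wait()
        return holder["result"]
```

A thundering herd of identical `eth_blockNumber` calls would then result in a single upstream request whose result is fanned out to every waiter.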
Completed
Smart Batching
• RPC-level batching for incoming requests
• Auto-batch requests towards upstreams
• Allow multi-chain requests within one request

Kasra Khosravi Over 1 year ago
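The auto-batching described above maps individual calls into a standard JSON-RPC 2.0 batch array. A small sketch of the mechanics (helper names are illustrative):

```python
def batch_requests(calls):
    """Combine individual JSON-RPC calls into one batch payload.
    `calls` is a list of (method, params) tuples; ids are assigned
    sequentially so responses can be matched back to callers."""
    return [
        {"jsonrpc": "2.0", "id": i, "method": method, "params": params}
        for i, (method, params) in enumerate(calls)
    ]

def match_responses(batch, responses):
    """JSON-RPC servers may return batch responses in any order;
    re-order them by id to match the original call order."""
    by_id = {r["id"]: r for r in responses}
    return [by_id[req["id"]] for req in batch]
```

The id-based matching is what makes the batch safe: per the JSON-RPC 2.0 spec, servers are free to reorder responses within a batch.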
Web console for config and observability
A control panel frontend. A first iteration may just be a text editor for erpc.yaml and stop/start buttons for the containers.

jaybuidl Over 1 year ago
Auto-batch multiple eth_calls for evm upstreams using multicall3 contracts
Most RPC providers have a pay-per-call model. Batching eth_call requests with the Multicall3 contract would significantly reduce RPC costs on most providers.

Clément Over 1 year ago
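A rough sketch of the batching step: queued eth_call requests are grouped by block tag (only calls against the same block can share a batch) and turned into the `(target, allowFailure, callData)` structs that Multicall3's `aggregate3` takes. The function names here are hypothetical, and the actual ABI encoding of `aggregate3` would be done by an ABI library in a real implementation.

```python
# Multicall3 is deployed at the same address on most EVM chains.
MULTICALL3 = "0xcA11bde05977b3631167028862bE2a173976CA11"

def collect_eth_calls(requests):
    """Group queued eth_call params by block tag and build the
    (target, allowFailure, callData) tuples that aggregate3 expects."""
    batches = {}
    for params in requests:
        call = params[0]
        block_tag = params[1] if len(params) > 1 else "latest"
        batches.setdefault(block_tag, []).append(
            (call["to"], True, call.get("data", "0x"))
        )
    return batches

def aggregated_request(block_tag, calls):
    """Skeleton of the single eth_call that replaces the whole batch;
    in practice the data field is the ABI-encoded aggregate3(calls)."""
    return {
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [{"to": MULTICALL3, "data": "<abi-encoded aggregate3>"}, block_tag],
        "id": 0,
    }
```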
Completed
Request decompression
The request arrives at the server (eRPC) gzipped; eRPC decompresses it and passes it to the upstream servers. We're currently offering this for some of our customers via a Lua script in OpenResty. The benefit to our customers is that it helps them save egress costs when running in clouds that charge egress fees. Would love it if this was something eRPC could support. For example:

```shell
echo '{ "jsonrpc":"2.0", "method":"eth_chainId", "params":[], "id":1 }' | gzip | \
  curl -i --request POST \
    --url http://localhost:4000 \
    --header 'Content-Type: application/json' \
    --header 'Content-Encoding: gzip' \
    --compressed \
    --data-binary @-
```

Caleb Call Over 1 year ago
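On the server side this comes down to checking the Content-Encoding header and inflating the body before handing it to the JSON-RPC layer. A minimal sketch (the function name is illustrative):

```python
import gzip

def decompress_body(headers, body):
    """If the client sent Content-Encoding: gzip, decompress the body
    before forwarding it upstream; otherwise pass it through unchanged."""
    if headers.get("Content-Encoding", "").lower() == "gzip":
        return gzip.decompress(body)
    return body
```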
Completed
Discovery and Routing
Automatically finding nodes for chains, plus normalizing their requests/responses so you don't have to figure out which one supports trace/geth_trace/etc.

Kasra Khosravi Over 1 year ago
v0.1
Completed
Discovery and Routing
automatic finding nodes for chains + normalizing their request/responses so you don't try to find which one supports trace/geth_trace/etc

Kasra Khosravi Over 1 year ago
v0.1
Planned
Horizontal scaling via shared state
Increase RPS simply by running more instances of eRPC. To enable this, rate limits and some other state must be shared across all instances.

Kasra Khosravi Over 1 year ago
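The shared-state requirement can be illustrated with a fixed-window rate limiter whose counters live in a shared store. In production the store would be something like Redis (INCR plus EXPIRE) so every instance sees the same counts; a plain dict stands in for it here, and the class name is hypothetical.

```python
import time

class SharedRateLimiter:
    """Fixed-window rate limiter backed by a shared counter store, so
    multiple eRPC instances enforce one combined limit."""

    def __init__(self, store, limit, window_seconds=1):
        self.store = store          # shared across instances (e.g. Redis)
        self.limit = limit
        self.window = window_seconds

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        # One counter per (key, window) bucket; old buckets just expire.
        bucket = f"{key}:{int(now // self.window)}"
        count = self.store.get(bucket, 0) + 1
        self.store[bucket] = count
        return count <= self.limit
```

Two instances sharing the same store then jointly enforce the limit, which is the point of the feature request.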
Credit unit mapping for rate limiters
Support third-party providers (Alchemy, QuickNode, etc.) for rate limiting. This will correctly record used quota when different methods consume different amounts of quota on a provider (e.g. eth_getLogs is more expensive than eth_blockNumber).

Kasra Khosravi Over 1 year ago
Planned
Reconciliation between RPC vendors
This is useful for security, data validation, and data reconciliation. I.e., you make a request, it goes to multiple (at least 2) RPC vendors, and the responses are compared: if they are the same, the request is successful; if not, there is an error. Maybe this is a field you can select for a given endpoint.

Brendan Coughlan About 1 year ago
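The comparison step could look like the sketch below: fan the request out, canonicalize each response, and only succeed when enough vendors agree. The function name and quorum parameter are illustrative.

```python
import json

def reconcile(responses, min_agree=2):
    """Compare results from multiple vendors; succeed only when at
    least `min_agree` of them returned an identical result."""
    counts = {}
    for r in responses:
        key = json.dumps(r, sort_keys=True)  # canonical form for comparison
        counts[key] = counts.get(key, 0) + 1
    best_key, best = max(counts.items(), key=lambda kv: kv[1])
    if best < min_agree:
        raise ValueError("vendors disagree: no result reached quorum")
    return json.loads(best_key)
```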
Completed
Public RPC Aggregator
Some setting in our erpc.yaml (like projects.networks.[i].public = true) which would automatically add all the public upstreams for our network, plus we could add our own custom ones on top of that just with the normal way of specifying upstreams. By public upstreams, this refers to https://chainlist.org/ RPC endpoints. Chainlist is open source and has an extremely wide availability of RPCs for each chain, so it is a great choice for this. It would be great if the implementation could also come with some reasonable failsafe defaults to aggressively ignore endpoints that are constantly failing (many of these RPCs are bad, so we need to filter them out fast). Relevant conversation and context: https://t.me/erpc_cloud/449 (@WesleyCharlesBlake already started down this path). This would be absolutely HUGE for supporting multichain applications, as developers would no longer need to worry about specific RPC endpoints at all and could add new chains to their app extremely easily.

v Over 1 year ago
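The "aggressively ignore failing endpoints" part amounts to tracking an error rate per public upstream and dropping anything above a threshold. A toy sketch of such a health score (class name and thresholds are hypothetical):

```python
class UpstreamScore:
    """Track a per-upstream error rate and mark endpoints unhealthy
    once they fail too often, so bad public RPCs are filtered out fast."""

    def __init__(self, max_error_rate=0.5, min_samples=5):
        self.max_error_rate = max_error_rate
        self.min_samples = min_samples  # give new endpoints a grace period
        self.ok = 0
        self.err = 0

    def record(self, success):
        if success:
            self.ok += 1
        else:
            self.err += 1

    def healthy(self):
        total = self.ok + self.err
        if total < self.min_samples:
            return True  # not enough data yet; keep it in rotation
        return self.err / total <= self.max_error_rate
```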
Completed
Add consensus and data integrity failsafe policy
Users should be able to define the number of in-consensus nodes for a particular method/request, for example:

```yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 1
        failsafe:
          # ...
          consensus:
            # Total number of nodes that must be tried to fetch the data,
            # e.g. eth_getBlockByNumber is fetched from 3 nodes.
            maxCount: 3
            # Minimum required nodes to consider the response accurate,
            # e.g. at least 2 nodes must return the same block info.
            # For methods like eth_blockNumber it can mean at least 2
            # responses are needed, and then the highest value is picked.
            minCount: 2
```

Aram Alipoor Over 1 year ago
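The minCount rule for eth_blockNumber described in the example (require at least N responses, then pick the highest value since honest nodes may lag by a block or two) could be sketched like this; the function name is illustrative:

```python
def consensus_block_number(results, min_count=2):
    """Require at least `min_count` eth_blockNumber responses and
    return the highest block seen, since lagging nodes report lower
    heights than the true chain head."""
    if len(results) < min_count:
        raise ValueError("not enough responses for consensus")
    return max(int(r, 16) for r in results)  # results are hex quantities
```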
Completed
Extend caching config (TTL, Per-method policies, Multiple storage)
Would be awesome to have per-project custom caching policies, with a config like this for example:

```typescript
type CachingPolicy = {
  /// Type of caching policy (maybe it could be possible to have something like "block" and "reorg" types?)
  type: "duration",
  /// The JSON-RPC method to cache
  method: string,
  /// Some regex params for which to skip caching if present in the params,
  /// or maybe something like the upstream config with ignore + allow properties
  ignoreForParams: string[],
  /// Custom properties for duration-based caching
  duration?: { period: string }
}
```

My use case would be around the Pimlico endpoint pimlico_getUserOperationGasPrice, with something like 10s caching, to get faster responses and reduce Pimlico credit consumption if we get a user activity spike on our end. I think it could also be useful for frontend applications to use duration-based caching for eth_getBlockByNumber with the latest param, with an ignore pattern like ^(?!latest$|earliest$).*$

Quentin Nivelais Over 1 year ago
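The semantics of the proposed policy (a TTL per method, skipped when any param matches an ignore regex) could be sketched as below; the class is a hypothetical illustration, reusing the ignore pattern from the request:

```python
import re

class DurationCache:
    """Duration-based cache for one method, with regexes that skip
    caching when any param matches (e.g. block tags other than
    'latest'/'earliest')."""

    def __init__(self, period_seconds, ignore_for_params=None):
        self.period = period_seconds
        self.ignore = [re.compile(p) for p in (ignore_for_params or [])]
        self.entries = {}  # key -> (expires_at, value)

    def cacheable(self, params):
        return not any(rx.search(str(p)) for rx in self.ignore for p in params)

    def get(self, key, now):
        entry = self.entries.get(key)
        if entry and now < entry[0]:
            return entry[1]
        return None  # missing or expired

    def put(self, key, value, now):
        self.entries[key] = (now + self.period, value)
```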
Completed
Add Pimlico transport
Same as the custom Alchemy and Envio EVM types, it would be awesome to have this for Pimlico, since it supports a lot of chains and has a "multichain" API URL: https://api.pimlico.io/v2/{chainId}/rpc?apikey=[YOUR_API_KEY_HERE]

Quentin Nivelais Over 1 year ago
Live (or prompted) reload of configuration settings
Similar to the ‘Web console for config’ request, it would be helpful if changes to the configuration could be updated within the running erpc daemon, similar to applications like nginx. When running through docker, having to restart the container each time something is changed is a bit disruptive.

DefiDebauchery 5 months ago
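One simple shape for this (besides an nginx-style reload signal) is watching the config file's mtime and swapping in the new config only when it parses, keeping the old one otherwise. A hypothetical sketch, not eRPC's actual mechanism:

```python
import os

class ReloadableConfig:
    """Reload an erpc.yaml-style config when the file's mtime changes,
    keeping the previous config if loading the new one fails."""

    def __init__(self, path, loader):
        self.path = path
        self.loader = loader  # e.g. a YAML parse + validate function
        self.mtime = os.path.getmtime(path)
        self.current = loader(path)

    def maybe_reload(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:
            try:
                self.current = self.loader(self.path)
                self.mtime = mtime
                return True
            except Exception:
                self.mtime = mtime  # don't retry a broken file every poll
        return False
```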
Allow per-method failsafe policy definitions and reusable policy templates
Allow defining failsafe policy templates and introduce a 'method' field to allow a customized setup for different methods. For example, the eth_blockNumber timeout and retry policy will be different than trace_debug* methods, which are usually slower and more expensive.

```yaml
projects:
  - id: main
    networks:
      - architecture: evm
        evm:
          chainId: 42161
        failsafe: my-network-policy
    upstreams:
      - id: blastapi-chain-42161
        # ...
        failsafe: my-upstream-policy

failsafe:
  templates:
    - id: my-upstream-policy
      policies:
        - method: 'trace*'
          timeout:
            duration: 15s
          retry:
            maxCount: 2
            delay: 2s
        - method: '*'
          timeout:
            duration: 1s
          retry:
            maxCount: 3
            delay: 500ms
# ...
```

Aram Alipoor Over 1 year ago
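The method patterns in a template like the one above would be matched in order, first match wins, so specific globs must come before the catch-all '*'. A tiny sketch of that selection logic (the function name is illustrative):

```python
from fnmatch import fnmatch

def select_policy(method, policies):
    """Return the first policy whose method glob matches; patterns are
    checked in order, so put specific globs before the '*' fallback."""
    for policy in policies:
        if fnmatch(method, policy["method"]):
            return policy
    return None
```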
Completed
Caching un-finalized data
Along with re-org tracking and invalidation. Perhaps as an opt-in feature, because it incurs upstream RPC costs.

Aram Alipoor Over 1 year ago