

Prompting Subnetwork, Validators, ChatTensor, Censorship

Fri Mar 31, 2023 11:56 am
This is a manual transcript of the latest TGIFT Twitter Spaces held by the Opentensor Foundation. It has been edited and cleaned up to provide a comprehensive and readable format. The full recording can be found here.

I just want to say thank you to everyone for being here today. It's really exciting; we've been working really hard to build all the stuff we're going to be talking about today, and we have a quick little surprise that's only going to be available during this call. By being here and participating, you're going to be able to access some of the cutting edge that Bittensor has to offer. But before we go there, I'd really like to start by talking about why we began building this in the first place.

This goes all the way back to the first week of December, when we were at the NeurIPS conference (the Neural Information Processing Systems conference). We met a lot of really intelligent people there and got to see and meet people who were working on things similar to ours, and this was around the same time, on that Tuesday, that ChatGPT dropped. We were talking about it at the conference, and some RL people and I were discussing that it probably wouldn't be all that hard to re-implement and make our own open source version. Little did I know that there's a lot more involved than I initially realized. Still, we were able to make a model that was pretty good at being aligned within the first couple of weeks. This was a research project, just a re-implementation, and we showed it to some community members and they were really enthused and jazzed about it.

We tried to make it better and better and really design out some of the principles that are going to be part of what Bittensor is going forward, so it took a little longer than we initially wanted. Now we have chat.bittensor.com, an interface to these kinds of prompting models, and because we have just released Finney, which enables sub-networks and delegated staking, the next step we are taking is going to enable incredible businesses and applications to be built on top of Bittensor. Finney was that first step.

Chattensor was a research project: figuring out best practices and sharing them with the greater community. With Finney we've enabled delegated staking and delegated validators so that businesses can come online, and we've already seen some do so, for example what mog machine, Mr. Seeker, and AI Zorro have been working on, as well as Tao Station and RunPod. There are so many it would be hard to list them all here, and I don't want to leave anyone out, but it's been incredible to see this growth.

With Chattensor, the idea is that anybody operating a delegated validator is going to be able to spin up their own validator, spin up a websocket and REST API, and then gate access to the Bittensor network, whether through Stripe API integrations where you're taking fiat such as United States dollars or British pounds, or through crypto, and so on. However you want to take payments for the value you're providing there, which is access to the marketplace of machine intelligence.

It is your prerogative as a delegated validator, and Chattensor is a demonstration of that, but we really want to take it one step further and make this go interplanetary. As you know, the Nakamoto sub-network, sub-network 3, is an excellent mixture-of-experts next-token-prediction model that you can get embeddings from; there are all these different types of synapses you can query. What we've been working on, and what we want to demonstrate today, is what a prompting model like ChatGPT would look like here: instead of predicting the next token, it operates in a user/assistant messaging format. What type of performance, what type of network, and what type of models could we include in a network like this? So for the remainder of this call we want to demonstrate what this prompting network can offer, what a delegated validator is going to be able to offer, and what I like to think of as a mini commerce district inside of the Bittensor network.

I'm going to tweet out this link really quick. This link is going to come down when we're finished, and you do not have to delegate to access this demo. All you need to do is log in with your polkadot.js or Talisman wallet on desktop. We also offer support on mobile with the Nova wallet.

Can I ask a question to give the audience some more context? Can you explain exactly what makes this different from the Chattensor we saw last week, and what's so exciting about what we've done here with the demo, so people can understand?

Yes, absolutely. Right now on the Bittensor network, on sub-network 3, knowledge is exchanged in a numerical representation, logits or tokens. With the prompting network what we really wanted to do was validate straight-up text: no tokenization or de-tokenization, no applying forward passes to a logit processor, or anything like that. We just wanted to evaluate text. Utilizing the research we did with Chattensor, we are using a reward model to validate different APIs and different types of models on the network, so theoretically a human could be a member of this network. They might not be able to respond fast enough, but given a prompt, a human could type in the response, as could any type of language model. If you plug in the Cohere API, that's going to work on the Bittensor prompting sub-network. If you plug in anything that is a language model producing text, it can be validated and earn rewards in the marketplace. That is Bittensor, that's what we are building right now, and that's what we're going to be demonstrating the prowess of.
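The "anything that produces text can mine" idea can be sketched in a few lines. This is a hypothetical illustration, not the real Bittensor API: `make_miner` and `score_completion` are invented names, and the reward model here is a toy heuristic standing in for the real learned one. The point is only that a miner is a function from chat messages to text, and the validator scores that raw text directly.

```python
# Hypothetical sketch: a prompting-subnetwork miner is just a function from
# chat messages to a text completion, scored by a reward model on raw text.
# All names here are illustrative, not the actual Bittensor interfaces.
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}


def make_miner(generate: Callable[[List[Message]], str]) -> Callable[[List[Message]], str]:
    """Wrap ANY text generator (an LLM, a hosted API, even a human typing)
    as a miner: it just maps a conversation to a text reply."""
    def forward(messages: List[Message]) -> str:
        return generate(messages)
    return forward


# A trivial "model": canned answers, no tokenizer anywhere in sight.
echo_miner = make_miner(
    lambda msgs: "Austin" if "Texas" in msgs[-1]["content"] else "I don't know"
)


def score_completion(messages: List[Message], completion: str) -> float:
    """Stand-in for the validator's reward model (a toy heuristic here)."""
    return 1.0 if completion and completion != "I don't know" else 0.0


msgs = [{"role": "user", "content": "What is the capital of Texas?"}]
answer = echo_miner(msgs)          # plain text out
reward = score_completion(msgs, answer)  # plain text scored
```

Because validation happens on text rather than logits, swapping the lambda for a call to Cohere, OpenAI, or a self-hosted model changes nothing about how the validator sees the miner.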

So what we have here with the demo is really an expansion of what we had last week with Chattensor. Robert did an amazing job of producing a language model; we trained it in-house, it's our own model, and we needed to do that to learn how these things work really well. What you've been playing around with was a model hosted on Bittensor, but there was no incentive mechanism for chat-specific language understanding. We had built Chattensor to be very general, to work with unsupervised language models, and those produce output that is very abstract and a little difficult for most people to understand and apply in an application, more on the MLOps side than the client-facing side. What we've built here with the prompting network is an entire network where the outputs of these models are very interpretable. For instance, "What is the capital of Texas?" will return "Austin"; that's what the miners are literally responding. As Robert was saying, you could literally sit behind this endpoint and answer these questions yourself. We don't care; we're agnostic to the way in which this information is produced.

The demo we're going to post in the chat below is literally connecting to a test network where this is running, so you're querying Chattensor, but Chattensor is talking to about ten models and selecting among their outputs as the prompt response. What's really cool here is that we have not just one model but a whole market of models that can plug themselves into this front end and attempt to maximize the rewards, so we hope this will drive down the price and also improve and drive up the diversity of what we're seeing in a chat front end. The sky's the limit with what you can build, and this is just a demo we built in one day, so hopefully people like it.
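The "query about ten models and select the output" loop described above could look something like the following. This is a minimal sketch under assumed names (`select_best`, the toy length-based reward model), not the actual chattensor code: the real validator uses a learned reward model rather than a heuristic.

```python
# Illustrative selection loop: query several miners, score each response
# with a reward model, return the highest-scoring completion to the user.
from typing import Callable, List, Tuple


def select_best(
    prompt: str,
    miners: List[Callable[[str], str]],
    reward_model: Callable[[str, str], float],
) -> Tuple[str, int]:
    """Return (best_completion, winning_miner_uid)."""
    scored = []
    for uid, miner in enumerate(miners):
        completion = miner(prompt)
        scored.append((reward_model(prompt, completion), uid, completion))
    # max() compares by score first; ties break on uid.
    _, best_uid, best_completion = max(scored)
    return best_completion, best_uid


# Toy demo: two "miners", and a stand-in reward model that prefers
# longer answers (the real one is a trained preference model).
miners = [
    lambda p: "Austin",
    lambda p: "Austin, the capital of Texas.",
]
reward = lambda p, c: float(len(c))

completion, uid = select_best("What is the capital of Texas?", miners, reward)
```

Swapping the list of lambdas for network calls to live UIDs is the only change needed to turn this into the market-of-models front end the demo exercises.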

I just tweeted out the demo link. All you need to do is log in: if you're on desktop, log in with your Talisman or polkadot.js. If you're on mobile, we just started supporting mobile on both this demo and regular Chattensor at chat.bittensor.com using the Nova wallet.

We are going to be appending to the FAQ very shortly; we just added this functionality maybe an hour and a half ago. Feel free to throw some of your hardest problems at it. This is all running on a test network on Bittensor right now, and I think Jake said it really well: we are model agnostic. If you are producing intelligence, that intelligence is encompassed inside the marketplace that is Bittensor. So if you are limited by a specific API provider to a particular quota, you won't be with Bittensor, because we have not only redundancy but programmability built directly into these models. If you want to build an application, whether by running your own delegated validator or by having an agreement with a delegated validator through a subscription service, delegating TAO, or paying in crypto, you can build it on top of this prompting network. You can think of the prompting network as a commerce district inside of Bittensor where you can build your application and not have to worry about the API going down. Right now we only have a few miners, a few UIDs, in the network, but once we open this up toward the end of April we're going to have hundreds of UIDs all running their own variations of Chattensor, whether self-hosted or using a blend of different APIs or whatever your exact prerogative is; that's going to be driven by the incentive of the Bittensor network.

There's somebody I follow on Twitter, Lewis, who operates GPT Labs. That's a great example of someone who has struggled with, in his case, OpenAI's API: they had so much demand, but because they had to request access for more quota, their application broke. With something like Bittensor and hundreds of UIDs, we have the redundancy to ensure you're not going to run into that issue.

When we first set out to build Bittensor, we thought the incentive mechanism would drive people to open up their compute, their intelligence, and their data into the network, and that would be the killer app. We would have the most compute, the most data, and the highest-quality models, and indeed that is something we drove over the last year; it's been amazing to see. But there was another dimension we didn't consider when we first built Bittensor, which is censorship resistance. Because we're agnostic to what the miners are doing, and because there is no single entry point into this neural internet of endpoints, you can't censor the entry point; there are just too many to censor.

There's a large conversation going on right now in the AI industry about this memorandum, about censoring or stopping machine learning. And if you sign up for an API with a lot of these projects, often they say: here's your quota, who are you, and what are you using it for? We have AI that's gated, and the extra dimension I'm talking about, which I didn't foresee, is that we're building a truly gateless AI. Because we're decentralized, run by a large number of people distributed across the globe, you can't cut us all off; we're each like a head of the Hydra. If you want unstoppable applications built on artificial intelligence, I think we're providing that, and for the first time it's coming out with this new prompting network in a way that is very expressible and understandable to a general audience. We've touched the real world.

Something I've been discussing with some community members is the recently released ChatGPT plugins, where all you do is give the model a manifest, a schema dictating in natural English what you want it to do, and then it writes the code and executes it for you. It's a really interesting concept, and what's so cool is that it will work out of the box here: the functionality isn't built into the prompting network directly, but you could recreate it using the prompting network. Our goal is to encompass all of machine intelligence, whether that's images, audio, video, prompting, next-token prediction, or any other type of artificial intelligence that comes out in the future. It's all going to be available on top of Bittensor, because Bittensor is the neural internet.

Jake said it really well yesterday: we're in 1999 with the internet. Right now it's really hard on Bittensor to go and find the exact type of information you're looking for, because it's not clear how you would do that; it's just never been done before. What we're doing here with Bittensor, particularly the prompting network and delegated validators, is something very similar to what Google did with PageRank: we're making it not only searchable but accessible. The delegated validators you can build are really interesting, whether it's a simple one where a reward model scores responses and you set weights proportionally to the rewards (normalizing them and so on), or something super complex where you query every single UID for a few tokens and then have a learned model, like an extra linear layer at the very end, that selects which tokens to use based on the responses you got.
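The "simple" validator described above, setting weights proportionally to normalized rewards, can be sketched in a few lines. This is a hypothetical illustration (`normalize_weights` is an invented name, and the equal-weight fallback is an assumption), not the actual on-chain weight-setting code.

```python
# Minimal sketch of reward-proportional weight setting: accumulate a reward
# score per UID, clip negatives, and normalize so the weights sum to 1.
from typing import Dict


def normalize_weights(rewards: Dict[int, float]) -> Dict[int, float]:
    """Turn raw reward-model scores into a weight per UID."""
    # Clip negative scores to zero so bad miners get no weight.
    clipped = {uid: max(score, 0.0) for uid, score in rewards.items()}
    total = sum(clipped.values())
    if total == 0.0:
        # No useful signal this round: fall back to equal weights
        # (an assumption; a real validator might keep previous weights).
        n = len(clipped)
        return {uid: 1.0 / n for uid in clipped}
    return {uid: score / total for uid, score in clipped.items()}


# Three UIDs: one strong, one middling, one scored negatively.
weights = normalize_weights({0: 2.0, 1: 1.0, 2: -0.5})
```

A validator would then submit these normalized weights on-chain, so the incentive each miner receives tracks the reward model's judgment of its responses.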

There are so many awesome ways you can harness this, not just on the AI side but also for business applications. Unstoppable ones that cannot be gated by people in ivory towers who think they know better than you. We have something like liquid democracy here in Bittensor: I imagine if a validator gated their knowledge because they thought they knew best, you'd see inflows and outflows of stake based on how people feel about that validator. Better yet, you don't even need to go through a validator; you can circumvent them and talk to the network directly. The validators are simply client applications that make it easier, more expressible, and more user-facing, but the network is open to anyone. To developers out there who know how to access it: go in and use it. It is incentivized, so there is an economic barrier for sure, but that comes with the territory.

I like the analogy of indexing with Google: their PageRank algorithm for indexing the web removed the need to manually curate information for the user, which is what Yahoo had done and failed at, and that was the killer app for them. Simply an indexer, a ranking method. We're ranking the neural internet; that's what validators do, and they provide front ends for their users. It is slightly different, but analogies can only take you so far.

What Robert has built here with the demo is effectively an unstoppable AI front-end application. It makes sense to talk a little about how this sets down the path for anybody, not just us, to build things on top of Bittensor. We're building the tools so that you can plug in with these components. You can build your own Chattensor application, and we want to make that so easy that you can build it in a day, if not with a single docker compose up. With your validator hot key you can run your own Chattensor, and that's what makes this really unique.

I'm still working very diligently on this, but my idea is this: let's say you're a delegated validator and you want your own application, something like what Perplexity AI has been doing. They've been doing something really interesting where they combine a language model API (I'm pretty sure it's a single one, OpenAI's) with the ability to cite the sources where it found the information. Let's say you want to build something like that. Out of the box, with what we're open sourcing once we launch the public, incentivized prompting network, you'll just do docker compose up with a config where you specify your hotkey and your various API keys, whether that's a Stripe API key or a Coinbase Pay API key, and you can build your own proprietary payment methods in there as well, because this is all just part of the docker compose. What you then have is a gated REST API with a database for users and the ability to generate API keys. You can run a websocket or a REST API, or both, build whatever type of front end you would like, and boom, now you have a business, like a lot of the applications you've been seeing.
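The config described above might look something like the following sketch. Every key name here is an assumption for illustration; the real released format, once it's open sourced, will define its own schema.

```yaml
# Hypothetical validator-gateway config (illustrative only; not the
# real released format described in the call).
validator:
  hotkey: my-validator-hotkey        # your delegated validator hot key
  network: prompting                 # the subnetwork to serve
gateway:
  rest: true                         # expose a gated REST API
  websocket: true                    # and/or a websocket endpoint
payments:
  stripe_api_key: ${STRIPE_API_KEY}        # fiat payments
  coinbase_pay_api_key: ${COINBASE_KEY}    # crypto payments
users:
  database: users-db                 # per-user accounts and generated API keys
```

The point of the design is that everything behind this file, the payment gateways, the user database, the API surface, ships as one docker compose stack, so standing up a business on top of the network is a single command.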

There are costs to being a delegated validator, but you can also accept money, and the idea is that you'll take in more than you're spending. That difference is going to allow you to hire people to make your product better, and then you can grow the number of people using your service. That's a flywheel right there, and there could literally be hundreds of businesses just like this on top of the Bittensor network.