With GPT-3 getting so many people interested in NLP, and with OpenAI's recently announced pricing plan putting it out of many people's reach, I thought it might be useful for some to see how easy it is to deploy your own GPT-2 API.
This project uses two tools:
- Cortex: An open source model serving platform I help maintain. https://github.com/cortexlabs/cortex
- Hugging Face's Transformers: An open source library for using popular language models, like GPT-2. https://github.com/huggingface/transformers
This project uses a vanilla pre-trained GPT-2 and PyTorch. If you want to use TensorFlow/ONNX, that's supported as well ( https://github.com/cortexlabs/cortex/tree/master/examples/te... ).
If you want to finetune GPT-2 on your own text (a la AI Dungeon), I'd suggest using gpt-2-simple and deploying with Cortex: https://github.com/minimaxir/gpt-2-simple
Lastly, by following this example, you can deploy your API locally (where inference will probably be slow, depending on your hardware, but will cost you $0) or to a cluster on AWS, which Cortex can spin up/manage for you.
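For a sense of what "deploying" means here: Cortex drives the deployment from a small YAML API spec plus its CLI (`cortex deploy`). The fragment below is a sketch based on the config format of older Cortex releases; field names may differ in the current version, and `predictor.py` is a placeholder for your own predictor implementation:

```yaml
# cortex.yaml — illustrative API spec (field names from older Cortex
# releases; consult the repo's examples for the current schema)
- name: text-generator
  predictor:
    type: python
    path: predictor.py   # your predictor implementation
  compute:
    cpu: 1
    # gpu: 1             # request a GPU on an AWS cluster for faster inference
```

The same spec works for a local deployment (free, but slower inference) or against a Cortex-managed AWS cluster.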