Added write-up to Clustering for transaction classification notebook

pull/26/head
Colin Jarvis 2 years ago
parent b01900d5d9
commit 401f7c7ef0

@@ -6,7 +6,7 @@
"source": [
"# Clustering for Transaction Classification\n",
"\n",
"In this notebook \n",
"This notebook covers use cases where your data is unlabelled but has features that can be used to cluster it into meaningful categories. The challenge with clustering is making the features that distinguish those clusters human-readable, and that is where we'll use GPT-3 to generate meaningful cluster descriptions for us. We can then use these to apply labels to a previously unlabelled dataset.\n",
"\n",
"To feed the model we use embeddings created using the approach shown in the [Multiclass classification for transactions notebook](Multiclass_classification_for_transactions.ipynb), applied to the full 359 transactions in the dataset to give us a bigger pool for learning."
]
@@ -45,7 +45,7 @@
"from helpers import OPENAI_API_KEY\n",
"\n",
"openai.api_key = OPENAI_API_KEY\n",
"COMPLETIONS_MODEL = \"text-alpha-002-latest\"\n",
"COMPLETIONS_MODEL = \"text-davinci-002\"\n",
"\n",
"# This path leads to a file with embeddings created in the notebook linked above\n",
"embedding_path = 'data/transactions_with_embeddings_359.csv'"
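As a hedged sketch of what loading this file involves: similar cookbook notebooks persist embeddings to CSV as stringified lists, which need to be parsed back into numeric vectors before clustering. The `embedding` column name and its format are assumptions, and a tiny in-memory CSV stands in for `data/transactions_with_embeddings_359.csv` so the sketch is self-contained.

```python
# Sketch of loading saved embeddings back into a numeric matrix.
# Assumption: the CSV stores each embedding as a stringified Python list
# in an 'embedding' column, as in similar cookbook notebooks.
import ast
import io

import numpy as np
import pandas as pd

# Stand-in for data/transactions_with_embeddings_359.csv
csv_data = io.StringIO(
    "combined,embedding\n"
    '"Supplier: Post Office","[0.1, 0.2, 0.3]"\n'
    '"Supplier: British Library","[0.9, 0.8, 0.7]"\n'
)
df = pd.read_csv(csv_data)

# Parse each stringified list into a float vector, then stack into a matrix
matrix = np.vstack(df["embedding"].apply(ast.literal_eval).to_numpy())
print(matrix.shape)  # (2, 3)
```

The resulting `matrix` is what a clustering algorithm such as K-Means would consume directly.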
@@ -57,9 +57,7 @@
"source": [
"## Clustering\n",
"\n",
"We'll now try something different - lets see if the model can discern useful categories of its own for us to use to sift through and explore our data. \n",
"\n",
"This clustering approach draws heavily on the [Clustering Notebook](Clustering.ipynb)."
"We'll reuse the approach from the [Clustering Notebook](Clustering.ipynb), using K-Means to cluster our dataset with the feature embeddings we created previously. We'll then use the Completions endpoint to generate cluster descriptions for us and judge their effectiveness."
]
},
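The K-Means step described above can be sketched as follows. This is a minimal, self-contained illustration, not the notebook's exact code: synthetic embedding blobs stand in for the transaction embeddings, and the cluster count is reduced accordingly (the real data uses more clusters).

```python
# Minimal K-Means sketch over an embedding matrix, assuming scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated synthetic "embedding" blobs standing in for the
# transaction embeddings loaded from the CSV above
matrix = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 8)),
    rng.normal(1.0, 0.1, size=(20, 8)),
])

n_clusters = 2  # illustrative; the notebook clusters the real 359-row dataset
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=42)
labels = kmeans.fit_predict(matrix)
print(sorted(np.bincount(labels)))  # roughly equal cluster sizes here
```

Each transaction's `labels` entry is the cluster it was assigned to, which is what the later per-cluster sampling and description steps iterate over.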
{
@@ -222,7 +220,7 @@
}
],
"source": [
"# Reading a review which belong to each group.\n",
"# We'll read 10 transactions per cluster as we're expecting some variation\n",
"transactions_per_cluster = 10\n",
"\n",
"for i in range(n_clusters):\n",
@@ -237,7 +235,8 @@
" .values\n",
" )\n",
" response = openai.Completion.create(\n",
" engine=\"text-alpha-002-latest\",\n",
" engine=COMPLETIONS_MODEL,\n",
" # We'll include a prompt to instruct the model what sort of description we're looking for\n",
" prompt=f'''We want to group these transactions into meaningful clusters so we can target the areas we are spending the most money. \n",
" What do the following transactions have in common?\\n\\nTransactions:\\n\"\"\"\\n{transactions}\\n\"\"\"\\n\\nTheme:''',\n",
" temperature=0,\n",
@@ -258,11 +257,13 @@
]
},
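The per-cluster prompt construction in the loop above can be sketched in isolation. The `Cluster` and `combined` column names and the sample rows are assumptions for illustration; the sampling and prompt template mirror the diff's loop without the API call.

```python
# Sketch of building one cluster's prompt from sampled transactions.
# Assumption: a DataFrame with 'Cluster' labels and a 'combined' text column.
import pandas as pd

df = pd.DataFrame({
    "Cluster": [0, 0, 0, 1, 1, 1],
    "combined": [
        "Supplier: Post Office; Description: Postage",
        "Supplier: Royal Mail; Description: Stamps",
        "Supplier: DHL; Description: Courier fees",
        "Supplier: Legal Deposit Office; Description: Archival",
        "Supplier: British Library; Description: Legal deposit",
        "Supplier: NLS; Description: Literature intake",
    ],
})

transactions_per_cluster = 3  # the notebook uses 10 on the real data
i = 0  # one cluster, for illustration
transactions = "\n".join(
    df[df.Cluster == i]
    .combined.sample(transactions_per_cluster, random_state=42)
    .values
)
prompt = (
    "We want to group these transactions into meaningful clusters so we can "
    "target the areas we are spending the most money. \n"
    'What do the following transactions have in common?\n\nTransactions:\n"""\n'
    f'{transactions}\n"""\n\nTheme:'
)
```

The resulting string is what gets passed as `prompt` to `openai.Completion.create`, with the trailing `Theme:` cueing the model to complete a short cluster description.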
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"### Conclusion\n",
"\n",
"We now have five new clusters that we can use to describe our data. Looking at the visualisation, some of our clusters overlap and will need tuning to get to the right place, but we can already see that GPT-3 has made some effective inferences. In particular, it picked up that items including legal deposits were related to literature archival, which is true even though the model was given no clues about it. Very cool, and with some tuning we can create a base set of clusters that we can then use with a multiclass classifier to generalise to other transactional datasets we might use."
]
}
],
"metadata": {
