Commit aabf140 · Update README.md
Parent: a7babf7

README.md (changed)
tags:
- testing
size_categories:
- n<1K
---

# Functional Test Cases

This is a _very_ small list of functional test cases that a team of software testers (QA) created for an example mobile app called Boop.

## Dataset

* Name: `Boop Test Cases.csv`
* Number of Rows: `136`
* Columns:
  * `Test ID` (int)
  * `Summary` (string)
  * `Idea` (string)
  * `Preconditions` (string)
  * `Steps to reproduce` (string)
  * `Expected Result` (string)
  * `Actual Result` (string)
  * `Pass/Fail` (string)
  * `Bug #` (string)
  * `Author` (string)
  * `Area` (string)

> 💡 There are missing values. For example, not every test case has a related `Bug #`.
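
For a quick look at the data, the snippet below is a minimal sketch that loads the CSV with the 🤗 `datasets` library and counts the test cases without a linked `Bug #`; it assumes the file sits next to this card under its original name.

```python
from datasets import load_dataset

# Load the CSV from a local checkout of this repo (file name as listed above).
ds = load_dataset("csv", data_files="Boop Test Cases.csv", split="train")

print(ds.num_rows)        # 136
print(ds.column_names)    # ['Test ID', 'Summary', 'Idea', 'Preconditions', ...]

# Count test cases with no linked bug report (missing `Bug #` values load as None).
missing_bug = sum(1 for row in ds if not row["Bug #"])
print(f"{missing_bug} of {ds.num_rows} test cases have no `Bug #`")
```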

## Use Cases

Two common problems in Software Testing are:

* Duplicate test cases (and bug reports)
* Assigning incoming issues to the correct team quickly (whether they come from internal sources, Customer or Tech Support, etc.)

This dataset is probably too small to create an "Auto-Assigner" tool -- especially because almost half of the test cases fall under the `Account` Area.

However, with embeddings, we could check whether a new test case already exists by measuring its similarity to the ones already in the dataset 🤔
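
As a rough sketch of that idea (not part of the dataset itself), the example below embeds each `Summary` with an off-the-shelf `sentence-transformers` model and flags existing test cases that look similar to a new one; the model name, the similarity threshold, and the sample query are all assumptions.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

# Assumptions: the model choice and threshold would need tuning on real data.
model = SentenceTransformer("all-MiniLM-L6-v2")
SIMILARITY_THRESHOLD = 0.80

ds = load_dataset("csv", data_files="Boop Test Cases.csv", split="train")
summaries = [row["Summary"] or "" for row in ds]  # guard against missing values
corpus_embeddings = model.encode(summaries, convert_to_tensor=True)

def possible_duplicates(new_summary: str, top_k: int = 3):
    """Return (Test ID, Summary, score) for existing cases similar to a new one."""
    query_embedding = model.encode(new_summary, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    return [
        (ds[hit["corpus_id"]]["Test ID"], summaries[hit["corpus_id"]], round(hit["score"], 3))
        for hit in hits
        if hit["score"] >= SIMILARITY_THRESHOLD
    ]

# Hypothetical new test case summary:
print(possible_duplicates("User cannot log in with a valid password"))
```

The same check could be applied to incoming bug reports before they are filed.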