vrek@programming.dev · 2 days ago

I actually find the opposite is true. I don't mean it's bad, but you get much better results if you give it some basic starting point.

For example, at my last company we had a database which, to give a VERY small example, had a table with device serial, test type, test result (pass/fail), and an ID into another table. For each test, that other table had a series of rows with the same ID holding all the details of the test. For example, unit 123 might be a circuit board with 5 test points, with voltages measured at various points, all tested at one test station.
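Roughly this shape, with made-up names (none of these are the real table or column names):

```sql
-- Rough sketch of the two tables described above (names are illustrative)
CREATE TABLE TestRuns (
    TestId       INT PRIMARY KEY,   -- the ID pointing into the detail table
    SerialNumber VARCHAR(50),       -- device serial
    TestType     VARCHAR(50),       -- e.g. 'electrical test'
    TestResult   VARCHAR(10)        -- 'pass' or 'fail'
);

CREATE TABLE TestDetails (
    TestId       INT REFERENCES TestRuns(TestId),  -- same ID as above
    MeasureName  VARCHAR(50),       -- e.g. 'TP1' through 'TP5'
    MeasureValue DECIMAL(10,4)      -- e.g. the voltage at that test point
);
```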

So you would go into table 1, select all rows with serial 123 and test type "electrical test", copy the test ID, then go into table 2 and select all results for that ID.
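In query form, that manual process was two steps (again, hypothetical names):

```sql
-- Step 1: look up the test ID for one serial
SELECT TestId
FROM TestRuns
WHERE SerialNumber = '123'
  AND TestType = 'electrical test';

-- Step 2: paste the ID from step 1 into the detail lookup
SELECT *
FROM TestDetails
WHERE TestId = 42;  -- value copied by hand from step 1
```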

One day my boss sent me a list of 500 serials and told me to pull all the details and present them in a table.

Doing that manually would take hours. People with some SQL knowledge might recognize you could use a subquery. The problem was that the list sent to me was just a table copied and pasted over Teams. It would probably have taken at least half an hour to copy that into SSMS and fix all the formatting into valid SQL.
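The subquery version collapses the two manual steps into one query, something like this (hypothetical names):

```sql
-- One query instead of the copy-paste round trip
SELECT d.*
FROM TestDetails AS d
WHERE d.TestId IN (
    SELECT r.TestId
    FROM TestRuns AS r
    WHERE r.SerialNumber = '123'   -- this is where all 500 serials would go
      AND r.TestType = 'electrical test'
);
```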

I wrote a query that pulled the details for one serial using a subquery and pivoted the results, copied that and the list of serials into ChatGPT, and asked it to modify the query to include all the serials from the table in correct SQL format. It worked great (I got results for 500 unique serials, and a spot check of a random 10 of them returned the same results as the manual method). It took maybe 5 minutes.
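The one-serial starting point looked roughly like this (T-SQL PIVOT, hypothetical names and test-point columns); all ChatGPT had to do was expand the IN list:

```sql
-- Subquery + PIVOT for one serial: one row per serial, one column per
-- test point. Extending to 500 serials only means growing the IN list.
SELECT SerialNumber, [TP1], [TP2], [TP3], [TP4], [TP5]
FROM (
    SELECT r.SerialNumber, d.MeasureName, d.MeasureValue
    FROM TestDetails AS d
    JOIN TestRuns AS r ON r.TestId = d.TestId
    WHERE r.TestType = 'electrical test'
      AND r.SerialNumber IN ('123')   -- ChatGPT expanded this to all 500
) AS src
PIVOT (
    MAX(MeasureValue) FOR MeasureName IN ([TP1], [TP2], [TP3], [TP4], [TP5])
) AS p;
```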

Now, trying to get ChatGPT to do that from scratch would be painful, but with some idea of the structure of the data, an idea of what I wanted to do, and an example to follow, it worked wonderfully.