diff --git a/docs/order acording to quato type.html b/docs/quato_type.html
similarity index 99%
rename from docs/order acording to quato type.html
rename to docs/quato_type.html
index 56da065..617da07 100644
--- a/docs/order acording to quato type.html
+++ b/docs/quato_type.html
@@ -1,2 +1,2 @@
-
Row | Participant_id | session_no | code | related part (question) | quote type | content |
---|---|---|---|---|---|---|
2 | 1 | 1 | SAA cluster visualization | Demo | Feedback | A label on these groups would be nice. You know, when you move away a bit, you can’t see the sign clearly. |
3 | 1 | 1 | SAA cluster visualization | Demo | Feedback | When you collapse it, the relationships become less visible, making it harder to understand. Single relationships are sufficient; multiple relationships are not necessary. |
4 | 1 | 1 | tool implementation in GitHub | Demo | Feedback | When we look at it that way, the natural place for this is where the process is carried out, like where pull requests are managed, which is Git. It should be on Git. It needs to be a feature within Git. You would need an agent there, a reverse agent. |
5 | 1 | 1 | tool implementation in GitHub | Demo | Feedback | You would need to place this within the GitHub pull request, as a separate cell. Like an add-on, we would need to place the box there. The outcome of the assessment should be directly suggested to the reviewer in GitHub, passing through the operational level. It should be placed on the drawing canvas. |
6 | 1 | 1 | dashboard for insights | Post user trial | Feedback | Is there any idea of a dashboard? For example, getting an insight from there. Normally, I can't have an opinion at the beginning. If there's not a specific issue I want to focus on, a dashboard can provide an insight. |
7 | 2 | 1 | SAA graph model adding multiple issue nodes | Post user trial question_1 | Feedback | I noticed something related to issue types. It mostly seemed to focus on bugs. If something could be done about the relationships between features, the development of features, and the bugs caused by these features, I think it would be very useful. For example, if we release a feature today and then suddenly our bug count increases by 5 or 10 the next day, there could be an issue. It would be nice to represent that here as well. That's the first thing that came to my mind. I thought this could be added while we were doing this. |
8 | 2 | 1 | SAA graph model adding multiple issue nodes | Post user trial question_1 | Feedback | As I understand it, it is already used as a property, but maybe thinking of it as a completely different artifact could be good for finding relationships. |
9 | 1 | 1 | SAA software artifacts | Post user trial question_1 | Feedback | What is our sponsor? User story. Yes, it is necessary because without it we are working aimlessly, we can't do anything without it, we can't do anything on our own, and it also has to deliver a working code in the end. It can go as far as the build. This is the pipeline. |
10 | 4 | 1 | SAA graph model adding developer property | Post user trial question_1 | Feedback | Yes, it's not something that can be changed much, but there could be a difference. Here, we only thought of it as a developer. Because usually in a project, the developer is the one who opens, reports, and solves issues. But in a company structure, non-technical members can also open issues, leave comments, and maybe such a feature can be added to the developer. Like whether they are a technical developer or not. But I don't think it would be very different for open source, but it happens in a company environment. |
11 | 4 | 1 | SAA graph model adding developer property | Post user trial question_1 | Feedback | For example, we can deliberately exclude non-technical members in some queries. Or maybe I just want to find the non-technical member. This is the person I need to talk to about this topic. |
12 | 3 | 1 | dashboard for insights | Post user trial question_2 | Feedback | When I look at it, it would be great if it provided such services. For example, if there was a dashboard or any way to show the issues I encountered during the day... |
13 | 3 | 1 | dashboard for insights | Post user trial question_2 | Feedback | Exactly, it would be better if it showed me, for example, I had a problem with this issue today or this commit, and I could take action accordingly. As my colleague said, there are many details, and we usually go here when there's a problematic situation. We don't really know what problem occurred, who did it, why it happened, what happened. What issues were there in the issue, or maybe we can look at it from the other side. We can look from the developer's side as well. To better analyze the developer, to understand what problems they experienced, how to solve them, we can look from that side and get results. Such recommendation mechanisms would honestly be nice. |
14 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Feedback | It can be used there as well, like automatically adjusting the resolution, performing expand and collapse automatically. It’s also present in stories, but of course not that extremely... |
15 | 3 | 1 | SAA drawing canvas | Post user trial question_3 | Feedback | There is a special layer concept in Photoshop applications. For example, transferring the selected part to a layer, and when you hide that layer, it disappears. |
16 | 1 | 1 | dashboard for insights | User trial task 1 | Feedback | A dashboard could be useful there, providing direction. |
17 | 1 | 1 | software visualization complexity management | User trial task 1 | Feedback | I want my perception to be able to handle it with interest-focused visualization. And there's the zooming feature, where things expand as you zoom in. When you zoom out, groups, classes, or structures collapse, making it easier to manage perception. I think it needs to be optimized. |
18 | 3 | 1 | anomaly detection | Post trial question 3 | Feedback | We would also look at the anomaly detection parts from time to time, but it would be better if it reported to us. Because I can't keep checking it all the time. If it could report to me without spending time on it, I would look into the details. |
19 | 3 | 1 | software artifact traceability graph construction | Post trial question 3 | Feedback | This database could be evolved and developed at different points. For example, using a data processing tool like Pandas to evaluate the database, create different dashboards, show them, and maybe allow the user to do this. We write SQL here, but having our own language there, for instance, if I could do things with it, would be nice. It's just an idea, of course, very detailed and comprehensive topics that I can't fully detail, but such structures could be good. Of course. |
20 | 1 | 1 | user value and motivation | Post trial question 3 | Feedback | There's value here. We need to shape what the value means to whom and who will actually benefit from it. That's when we'll truly understand if it's valuable. Whether a finding is valuable or just remains a finding. If we look at it from a perspective-based view, it could be good for different industrial roles, in different areas, and according to the positions of different users. We need to go from the user story to the build and deploy stages. Creating insights for the product manager, the development team, developers, the test side, or the DevOps side. Because the same thing won't have the same value for everyone, but there are value sets you can present from different perspectives. If you present it from different perspectives, the benefit increases. |
21 | 1 | 1 | SAA perspective | Post trial question 3 | Feedback | Instead of policing, it's naturally better to approach it from a health perspective. Terms like the health of the project or the health of the development are better. Otherwise, the terms finding and policy can be a bit intimidating and scary. Everyone would avoid that. Values like the number of commits, the number of changes, or man-hours usually embarrass people. |
22 | 7 | 2 | SAA incremental graph model | User trial task 1 | Feedback | The criticality of commits can also be added to the features. Is it something trivial, or has it solved a critical blocker issue, or just a normal problem? It would be great if it provided that information next to commits that solve difficult issues or if they were marked. |
23 | 6 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Feedback | Another feature could be selecting a specific set of code files and analyzing them together. Sometimes multiple files need to be examined together. Finding collective information about files that change together would be useful. |
24 | 6 | 2 | SAA file clustering | User trial task 1 | Feedback | I can suggest focusing on file groups instead of individual issues, clustering files that change together in the same PR. Five files might change in one PR, and four in another, but understanding they form a cluster is more practical. Focusing on these clusters makes more sense because a single file can be misleading. |
25 | 6 | 2 | SAA integrating new analysis | User trial task 1 | Feedback | Or you could consider an additional analysis. Who are the files associated with this file? Using traceability, you can find out who worked on files associated with this one. You can come up with additional analysis. Some things are routine activities, for example. When we add a new configuration, we need to write it in 50 files. If you forget, it fails. Linking and showing that would be very useful. Also, explaining how the system fails would be very helpful. |
26 | 8 | 2 | SAA incremental graph model | Post user trial question_1 | Feedback | However, the importance of issues, the size of files and commits, maybe some coefficients can be added. Otherwise, it already looks quite diverse. I'm thinking about what I use in daily life, and I use these. |
27 | 6 | 2 | SAA incremental graph model | Post user trial question_1 | Feedback | Also, for example, the review relationship, the total number of reviewed lines. Similarly, the review goes back and forth, how many times it turned. Because, for example, it's reviewed, then changed again, and reviewed again. Turn count. |
28 | 8 | 2 | SAA integrating new analysis | Post user trial question_1 | Feedback | Also, I can say something. In Jira, sometimes I accidentally close a task and reopen it 10 seconds later. An analysis to eliminate such situations from the data can be helpful. |
29 | 9 | 2 | SAA integrating new analysis | Post user trial question_1 | Feedback | Here, we talked about these artifacts, but it's the same in corporates. The CI in that PR is successful, or maybe there are multiple CIs. Sometimes a single successful build is sufficient, but there are actually 3 builds running. Things there could also be taken into consideration. Suggesting authors whose builds passed successfully, or suggesting authors of commits that were successfully released somewhere, could bring nice outcomes. From our example, for front-ends every 3 months, we might have a library. Because our product will be used by derivative products from us, but if it's not needed by default, we move it somewhere. Because it's not needed for my product but I'll prevent derivatives. If the other two passed, this one has also passed. So, the person who made the commit has written proper, good code. Suggesting those people would be better. |
30 | 9 | 2 | SAA graph model adding build artifacts | Post user trial question_1 | Feedback | CI runs, for example, if we have 3 builds, we accept if 1 passes, but this could be added for clarification. |
31 | 6 | 2 | SAA file clustering | Post user trial question_1 | Feedback | The clusters formed by the files I'm talking about. The files that change together. |
32 | 10 | 2 | SAA reviewer/expert recommendation analysis | Post user trial question_3 | Feedback | Commenting can also be included in finding experts. |
33 | 6 | 2 | SAA integrating new analysis | Post user trial question_2 | Feedback | It would actually be nice to look at what I've done in the past for myself. |
34 | 10 | 2 | SAA integrating new analysis | Post user trial question_2 | Feedback | Something could be added regarding whether teams actually fit the team structure you designed, based on code files. Maybe something like that. Clustering developers in that way. Like in our company, sometimes a team starts with 4 people, then grows to 8, and then they split. But how should they be split, or according to what concept should new work be structured? |
35 | 9 | 2 | SAA integrating new analysis | Post user trial question_2 | Feedback | An analysis could be added to find critical developers. Like, if these critical developers leave, what happens? |
36 | 6 | 2 | dashboard for insights | User trial task 1 | Feedback | I would just add something to the visualization. The graph display is nice, but you can't explain everything. Graphics are more like charts, reports. We need to add dashboards. |
37 | 1 | 1 | user value and motivation | Presentation | Inquiry | As a user, how would I use this? What value would I get? What is the motivation to perform analysis with this tool? |
38 | 1 | 1 | user value and motivation | Presentation | Inquiry | For developers or managers? |
39 | 1 | 1 | inquiry about user value and motivation | Demo | Inquiry | We are not at an operational level right now. So, what is our purpose for using historical data in this? |
40 | 1 | 1 | inquiry about reviewer recommendation value | Demo | Inquiry | We are currently working with existing data retrospectively. Our current approach is approximately regressive. Naturally, we are not at an operational level yet. This pull request may be open or closed and completed. So, what is the value of recommending a reviewer here? |
41 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Inquiry | Can we manually add nodes? By going in between. Sometimes it's interesting to see if there is involvement from a developer in a specific file or commit beyond just running queries. |
42 | 2 | 1 | limitations of visualization tools | pre_survey_4 | Negative | As one of the people who gave a 4, I think that although visualization is a very powerful tool, sometimes some abstractions and summaries we make there can cause us to miss some details. That's why I gave it a slightly lower score, considering that aspect. |
43 | 1 | 1 | limitations of visualization tools | pre_survey_4 | Negative | Visualizer. Well, it can stay more in terms of inclusiveness. |
44 | 1 | 1 | external tool usage concern | Demo | Negative | Think of it this way, sir. I use GitHub, and I see this as a separate tool. If it's embedded within GitHub, the problem is nicely solved, but if it's separate, you're suggesting using two tools. |
45 | 1 | 1 | external tool usage concern | Demo | Negative | I usually want to keep it to a minimum. I don't want to go beyond one or two because I can't manage it, mentally as well. |
46 | 1 | 1 | SAA reviewer/expert recommendation analysis | Demo | Negative | On a separate note, relying on historical data can cause us to idolize the existing data. When the past repeats itself today, there must be a second person involved. Less qualified people should manually or rotationally handle this. |
47 | 1 | 1 | SAA reviewer/expert recommendation analysis | Demo | Negative | Score calculation is not a highly supported topic, actually. |
48 | 1 | 1 | SAA graph model adding build artifacts | Post user trial question_1 | Negative | Starting with a user story and providing delivery on the field, there is also the build aspect. In reality, it's the execution block. On the far left, there is an initiative, a job initiative, and on the far right, the actual working deployment, maybe not necessary to include, but there is the build. So, we have two gaps on the left and right sides. On the left side, as mentioned features, and on the right side, I think the build is missing. |
49 | 2 | 1 | SAA software artifacts | Post user trial question_1 | Negative | There's also this situation, I think it would provide more accurate information if deployments were used instead of commits. Because not every commit means something, but every deployment means a change for the product. Whether it's a different solution or a new feature addition. |
50 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Negative | The icons (node icons) are a bit too large, I'm saying this as a user. Maybe you can make it a bit more aesthetic. The node icons and such are too big. They try to draw attention to themselves. The icons stand out more than the context. But that's just a UI problem. |
51 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Negative | At some point, it feels like there are too many objects on the screen. For example, we clicked on something, something opened, we clicked on another thing, something else opened, and I might get lost there. So how should I not get lost? |
52 | 1 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Negative | We're initially focused on a specific dataset, which works according to the criteria I set. Since our world is selective, and the tool works with a selective dataset, I'm essentially setting the initial parameters. My limitations, my potentially incorrect assumptions, are what it works with. The tool doesn't have the freedom to explore and discover everything on its own. Since it remains within the questions I have, it has to take over from me. Just like when you're not confident in the first result, here, because I set the boundaries, it inherently has that aspect. |
53 | 2 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Negative | Inevitably, because we're talking about statistical science and related aspects, I think there's a bias. In previous projects I worked on, the person responsible for the entire project in one day changed because we moved to a different repo, and all the work appeared to be done by one person. Such situations can erase the past, making it unreliable. |
54 | 3 | 1 | SAA Software Artifact Inspection from drawing canvas | Post trial question 3 | Negative | And visually, even though there are many objects, it opens new doors as you dive in. It’s like a tree forming. At a certain point, we might lose track of it. |
55 | 2 | 1 | SAA software artifacts | Post trial question 3 | Negative | More specifically, probably the parts related to commits. I'm not sure how much it would benefit us because I'm generally opposed to measuring something in a project based on commits. |
56 | 8 | 2 | software visualization | pre_survey_4 | Negative | I might have rated this a 3. Visualization is important, but to a certain extent. It's not all about visuals; text-based information is also important. A combined version of both might provide a more reasonable experience. |
57 | 6 | 2 | anomaly detection | pre_survey_7 | Negative | It seems to me that these process anomalies are developed more on open-source data, so they don't map well to us. For example, we don't have a case where a bug is forgotten. Either we don't do it or we do it. This seems like a situation that could occur more within a community. |
58 | 7 | 2 | SAA complexity case | Demo | Negative | Also, in a normal project, there might not be so many artifacts. There are a lot of changes in the core artifacts, and everyone changes them there. That can explode there. |
59 | 9 | 2 | SAA drawing canvas | User trial task 1 | Negative | Yes, actually, if I could customize things in the tool, I could find things no one else can. But the set comes stable initially. For example, I could add things like low-priority issues or areas with fewer lines of code. I'm saying this to point out the shortcomings. |
60 | 6 | 2 | dashboard for insights | User trial task 1 | Negative | As I mentioned earlier, I gave it a 3. The graph isn't sufficient for me. |
61 | 1 | 1 | software visualization | pre_survey_3 | Neutral | The levels of visualization can vary greatly. |
62 | 1 | 1 | use of visualization tools | pre_survey_3 | Neutral | For example, with the Azure DevOps I am currently using, I use it for both work items and source control items. There are normal relational visuals, but we cannot fully understand the power and capabilities of the visualization being referred to here. So, yes, they do visualizations. You can track, navigate, and move from one item to another, following the work. |
63 | 2 | 1 | use of visualization tools | pre_survey_3 | Neutral | Previously, I worked at different companies, and during my time there, we heavily used visualization-based software primarily to see customer analytics in graphical form. Additionally, we used such software to examine logs of our applications, to see how they behaved, and to identify patterns, all through graphical means. |
64 | 3 | 1 | use of visualization tools | pre_survey_3 | Neutral | Well, we also have tools that we use currently, but visualization... I don't remember if I said no or yes, but I probably said no. We mostly use text-based tools, but as my colleague mentioned, we do have a visual tool for tracking logs. So, thinking about it that way, I might say we use it too. We also have various tools we use for database design and designs. If those are considered in this category, we might say we use them too, but mostly we work with text-based tools, I think. |
65 | 1 | 1 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | I was referring to tools like SonarQube or those built into the development environment. They are not directly integrated with all changes but can reside separately, like graphics capabilities that can vary. They can compare separately. Or applications like SonarQube can reveal other things outside of the integrated structure. |
66 | 1 | 1 | SAA graph model adding multiple issue nodes | Post user trial question_1 | Neutral | There is a hierarchy of types. A feature is a larger task, and under that, there are smaller tasks that contribute to it. |
67 | 1 | 1 | SAA software artifacts | Post user trial question_1 | Neutral | Actually, our main principle is this, we don't develop just for the sake of development. Our main concern is not finding bugs or writing code, our main concern is to create a working product, and the main sequence of the working product is indeed an executable output of a business story. The activity in the middle is artificial, it’s our development engineer's own problems. The business itself has no concern or relation to this. We found bugs, wrote code, committed them, etc., those are our issues, not a business issue. The business issue is where is my needed working product. The section on the board is our software development's own problems. The business world has no counterpart to it. |
68 | 1 | 1 | SAA integrating new analysis | Post user trial question_2 | Neutral | Once there's an issue side, like adding a user story related to issues, that part also expands. I think it's on the right. Focusing on the errors that come from there is also revealed. This way, the relationship between the user story and the errors starts to be discovered. |
69 | 5 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Neutral | I chose Peter because he was one of the last committers. |
70 | 2 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Neutral | I think I found the wrong result, but Michael had quite a lot of commits. I saw the number of commits, so I chose him. |
71 | 1 | 1 | software visualization complexity management | User trial task 1 | Neutral | Despite the beauty of visuals, it actually started in different places before. Maybe you've encountered it a few times as well. Normally, summarizing everything visually seems great, but suddenly realizing the vast size of the dataset we're working with makes us need to narrow it down again. For instance, when you first lay out all the projects in the trace, you can't understand anything. It's impossible because such a large structure emerges. Here, for instance, there were 7,000 nodes and 23,000 relationships. In a project, you easily reach that number within a year. By the third or even the second year, the numbers become very large. You need to transform it into a structure focused on interest. |
72 | 1 | 1 | software visualization complexity management | User trial task 1 | Neutral | Actually, the issue is focused visualization, something that both human perception can handle and can be engaging. I'm saying it's beautiful because if you put everything together, it becomes an incomprehensible structure. |
73 | 9 | 2 | use of visualization tools | pre_survey_3 | Neutral | For example, with the Azure DevOps I am currently using, I use it for both work items and source control items. There are normal relational visuals, but we cannot fully understand the power and capabilities of the visualization being referred to here. So, yes, they do visualizations. You can track, navigate, and move from one item to another, following the work. |
74 | 7 | 2 | use of visualization tools | pre_survey_3 | Neutral | Not much visualization, but we were using SonarQube for code analysis. Apart from that, tracking code developments with Jira is not very easy because you can't know what will come up in the code. There's no visual aspect. Just for code analysis. |
75 | 6 | 2 | use of visualization tools | pre_survey_3 | Neutral | Actually, we used different tools for different purposes. We also used SonarQube, for static code analysis. Similarly, we tried around 7-8 tools for measuring engineering productivity and working metrics of engineers. We started with one, then moved to another. This is our experience. If you ask what we use now, we use a tool called Swarmy. What does Swarmy do? It analyzes data related to Jira and presents various graphics on dashboards. It also generates metrics from its own CI/CD tools, providing a general organization. |
76 | 8 | 2 | use of visualization tools | pre_survey_3 | Neutral | We don't use it, but we use SonarQube, though I didn't think of it as very visual. It guides the code more. Also, we log all mentions on Github. We can see who did what through various graphs there. |
77 | 7 | 2 | SAA software artifacts | pre_survey_5 | Neutral | In the end, source code is always changing. For example, you draw the diagram initially, and it stays there. 90% of the time, it remains in its initial state, and no one updates it later. |
78 | 8 | 2 | SAA software artifacts | pre_survey_5 | Neutral | I can talk about the less common ones, like UML. These are generally found in more established, corporate settings, I guess? They might exist in such companies. But in start-up environments, because companies need to be more profitable. For example, I haven't used it at all, unless we're counting databases, in the last 3-4 years, not even once. A decision is made in a meeting and that's it. That document is lost and gone forever. |
79 | 7 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | These things usually don't come up. You see who is an expert from the code they write. You give them a task, they solve it, and you think, "Yes, this is good," and you give them more tasks. |
80 | 8 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | If a team is established, everyone knows each other after a while. It's like that. We don't have a tool, but everyone has an idea in their heads, and it's the same with us. |
81 | 9 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | It's the same with us. If there's an issue with a service, everyone knows who understands it better. There's a mental mapping, but we don't use a tool to identify it. |
82 | 10 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | But Git blame can sometimes give misleading results. An issue can be assigned to the wrong person. |
83 | 7 | 2 | anomaly detection | pre_survey_7 | Neutral | We don't have such a tool, but after an incident, we usually hold a meeting. We discuss and identify process smells there. |
84 | 9 | 2 | anomaly detection | pre_survey_7 | Neutral | We have some metrics. If a 1,000-line code review document is approved in 2.5 hours, why? Or if a 1,000-line code is approved in 2 minutes, you can't read 1,000 lines in 2 minutes. We do this by sampling. We get documents and ask the reviewers if the PR was closed too quickly. They might say it was a configuration change, so they knew about it. It's a process we follow, not tool-based. These metrics are collected during the sprint, and anomalies are investigated. |
85 | 9 | 2 | anomaly detection | pre_survey_7 | Neutral | There are metrics like time, code lines, and certain people who need to review according to the repo. These are related to the code review process. We also have corporate processes related to the bug tracking process. Maybe that applies here too? I'm not sure. A bug is not assigned immediately to someone. It first goes to the technical manager of the relevant module, who analyzes it and then directs some to others, sends some to the board, etc. It's already a corporate process. |
86 | 1 | 1 | software visualization | pre_survey_4 | Positive | It's natural, I think it's an evolutionary situation. Our visual intelligence is evolutionarily more advanced than our other senses. |
87 | 1 | 1 | software artifacts graph model inclusion | pre_survey_5 | Positive | Well, according to your Graph model, you already encompass all four. There are files, issues, commits. Pull requests are already artificial artifacts in between, so I might be completing it for that reason. |
88 | 4 | 1 | external tool usage concern | Demo | Positive | When I first started as a junior developer, I didn't have enough experience to find a reviewer. I was so desperate that using another tool wouldn't have bothered me that much. |
89 | 3 | 1 | SAA reviewer/expert recommendation analysis | Demo | Positive | So, scoring who has been more involved with this file can make my job easier when assigning reviews. |
90 | 1 | 1 | SAA incremental graph model | Post user trial question_1 | Positive | Of course, it can be added. (new artifacts) |
91 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Positive | It's a nice, commonly used canvas. |
92 | 2 | 1 | SAA Software Artifact Inspection from drawing canvas | Post user trial question_3 | Positive | I think the visual representation and the things done here have a benefit for companies. We always try to find measurable things about people's performances or what they do. Providing such a history and showing progress over time, for instance, showing that Hans had issues with certain commits and bugs, but a year later, he improved. Putting such things into measurable terms is really valuable. This is something we always lack. |
93 | 2 | 1 | software visualization | Post user trial question_3 | Positive | Seeing this visually is really important as it provides evidence. Usually, these things are done based on gut feelings, or you look at who closed the most issues, but here you can see much more detailed and valuable insights. |
94 | 1 | 1 | SAA integrating new analysis | None | Positive | There could be another area where you can create value. Especially if you link this to builds or deployments. What is basically included in this build, what different things were committed, based on pull requests and user stories. Similar things are done in some aspects, if you remember. You are visualizing that. There can be some value derived from that. |
95 | 1 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | It’s a somewhat obvious question because one involves human effort and potential errors, while the other is automatic. It's a self-confirming thing. |
96 | 4 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | Also, when looking at GitHub, I sometimes feel like I might have missed something. I know I'm guessing, but here, since I get a suggestion, it feels a bit more reliable. With GitHub, I worry if I missed something. |
97 | 1 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | This also feels complex to me (GitHub analysis). At some point, the file size increases and it becomes complex, exceeding human limits. |
98 | 1 | 1 | software visualization | User trial task 1 | Positive | Visualization, as you initially said, is a stronger area for our perception. Compared to verbal or written structures, visual structure is much better. I think we first have visual then written perception. Visuals inherently have high value. |
99 | 4 | 1 | anomaly detection | User trial task 2 | Positive | Well, JQL is generally very difficult, not user-friendly at all. So personally, I liked having this as an alternative. |
100 | 3 | 1 | SAA reviewer/expert recommendation analysis | Post trial question 3 | Positive | I would definitely use the scoring part (expert, reviewer part). That's one of my favorite areas. I think it could be further developed. |
101 | 3 | 1 | SAA Software Artifact Inspection from drawing canvas | Post trial question 3 | Positive | Some of our tasks are very detailed, but at some point, it would really make our job easier. "Oh, really? What happened next? Who reviewed it? What issues were there?" Visually, it would be really useful, and we would probably use it. |
102 | 3 | 1 | software artifact traceability graph construction | Post trial question 3 | Positive | But I think our strongest point, which might not have been discussed here, is being able to take all this data, create a database, and visualize it. It's a really nice feature. |
103 | | 1 | anomaly detection | Post trial question 3 | Positive | For me, the part that caught my attention was the anomaly detection section. I think the anomaly section is very valuable. I would definitely use that. |
104 | 2 | 1 | SAA integrating new analysis | Post trial question 3 | Positive | I think a world where it's easier to integrate new things would be better. |
105 | 6 | 2 | CASE tool selection | pre_survey_1-2 | Positive | The community already supports it (GitHub and Jira). More importantly, when you use these two products, you can easily use other tools as well. For example, if you want to use any analytical platform, they all support these two. So, you don't need another tool. |
106 | 6 | 2 | software visualization | pre_survey_4 | Positive | Absolutely, it's better to have visual support than walking blind. Metrics are everywhere, in logs and other places, but being able to see them at a glance, to say "Ah, there's a problem here," is very important. |
107 | 9 | 2 | software visualization | pre_survey_4 | Positive | I thought visualizing could be useful to quickly catch patterns at a glance. For example, bugs always assigned to the same person or a bug constantly pointing to the same source code. It's much faster to see this visually rather than in text. So, it indicates there might be an issue with this source code or this file. It helps to catch the pattern at a glance. But of course, it needs to go into details. I might not be able to comment on the details, but I think it helps to catch the correct pattern at first glance. |
108 | 9 | 2 | SAA perspective | Demo | Positive | If I take this to our company, it would be actively used from the first day. For our product, I'm saying. We are writing a product, and since we have a team of 70 people, it works well. Because there are teams separated by domain. When one domain writes an integration for another, they know their own team well but may lack information on who to ask in the other team. Of course, all of this can be solved through communication, but with so much remote work nowadays, you have to get up and ask the person next to you. Instead, you write it in the metrics and everyone sees it. I would make them buy this tool. |
109 | 9 | 2 | SAA perspective | Demo | Positive | Maybe others will agree about corporates. In corporates, it's really hard to measure someone's performance. In startups or places where the entrepreneur provides the capital, they already closely monitor the person. Or there are fewer people, so it's quickly visible. There was a law called social accountability or something like that. In a team of 50, there are 5 people who don't work, and it's very hard to find and prove that, and there's no penalty according to labor laws. If we had a tool that says, "You keep getting bugs in your code, the analysis tool says so," we could have a one-to-one meeting and after 3-5 meetings, say, "We have to part ways because this tool shows your performance isn't good." We actually care a lot about these metrics for that reason. If you look at 1000 lines of code in 2 minutes, there must be an explanation, or it means you just looked at it superficially. So, for companies like ours, corporates might benefit more. |
110 | 10 | 2 | SAA reviewer/expert recommendation analysis | Demo | Positive | I agree with this usage. In teams, from a developer's perspective, there are 6, 7, 8 people. Within the team, I know who wrote what. Sometimes we need to go to a point touched by another team. I don't know who wrote what in the other team, but by looking at this tool, I can understand who knows what. That seems quite advantageous to me. |
111 | 8 | 2 | SAA incremental graph model | Post user trial question_1 | Positive | I think the relations are quite comprehensive and sufficient. |
112 | 9 | 2 | SAA drawing canvas | Post user trial question_3 | Positive | I would use it too, it would be useful for me. I'm in DevOps, for example. When a problem arises somewhere, I can quickly go and find the person responsible without asking anyone. "Why are we breaking these?" I often say. |
113 | 7 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | Well, I wrote what the tool suggested. The tool's suggestion made sense to me. |
114 | 6 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | This actually shows that it was a difficult question. It indicates that it can't be solved at a glance and that tool support is required. |
115 | 6 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | Searching manually is more difficult, of course. The tool directly shows the results and reasons. |
116 | 7 | 2 | SAA value | Post trial question 3 | Positive | We could find the person who works the most on the project. Let's not miss this. We can say, "This person is good." It guides us. |
117 | 9 | 2 | software visualization | Post trial question 3 | Positive | Many tools don't provide visual communication. And I can navigate and see, I can examine relationally. It's useful because it's multifaceted. Actually, in our tools, we can say something according to something. But if there is such data in an issue area and the developer did such a job, it allows us to have multifaceted data. |
118 | 10 | 2 | SAA graph layout | Post trial question 3 | Positive | The expandability of the graph is very good, I think. I first... |
A | B | C | D | E | F | |
---|---|---|---|---|---|---|
1 | Participant_id | session_no | code | related part (question) | quote type | content |
8 | 2 | 1 | SAA graph model adding multiple issue nodes | Post user trial question_1 | Feedback | As I understand it, it is already used as a property, but maybe thinking of it as a completely different artifact could be good for finding relationships. |
9 | 1 | 1 | SAA software artifacts | Post user trial question_1 | Feedback | What is our sponsor? User story. Yes, it is necessary because without it we are working aimlessly, we can't do anything without it, we can't do anything on our own, and it also has to deliver a working code in the end. It can go as far as the build. This is the pipeline. |
10 | 4 | 1 | SAA graph model adding developer property | Post user trial question_1 | Feedback | Yes, it's not something that can be changed much, but there could be a difference. Here, we only thought of it as a developer. Because usually in a project, the developer is the one who opens, reports, and solves issues. But in a company structure, non-technical members can also open issues, leave comments, and maybe such a feature can be added to the developer. Like whether they are a technical developer or not. But I don't think it would be very different for open source, but it happens in a company environment. |
11 | 4 | 1 | SAA graph model adding developer property | Post user trial question_1 | Feedback | For example, we can deliberately exclude non-technical members in some queries. Or maybe I just want to find the non-technical member. This is the person I need to talk to about this topic. |
12 | 3 | 1 | dashboard for insights | Post user trial question_2 | Feedback | When I look at it, it would be great if it provided such services. For example, if there was a dashboard or any way to show the issues I encountered during the day... |
13 | 3 | 1 | dashboard for insights | Post user trial question_2 | Feedback | Exactly, it would be better if it showed me, for example, I had a problem with this issue today or this commit, and I could take action accordingly. As my colleague said, there are many details, and we usually go here when there's a problematic situation. We don't really know what problem occurred, who did it, why it happened, what happened. What issues were there in the issue, or maybe we can look at it from the other side. We can look from the developer's side as well. To better analyze the developer, to understand what problems they experienced, how to solve them, we can look from that side and get results. Such recommendation mechanisms would honestly be nice. |
14 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Feedback | It can be used there as well, like automatically adjusting the resolution, performing expand and collapse automatically. It’s also present in stories, but of course not that extremely... |
15 | 3 | 1 | SAA drawing canvas | Post user trial question_3 | Feedback | There is a special layer concept in Photoshop applications. For example, transferring the selected part to a layer, and when you hide that layer, it disappears. |
16 | 1 | 1 | dashboard for insights | User trial task 1 | Feedback | A dashboard could be useful there, providing direction. |
17 | 1 | 1 | software visualization complexity management | User trial task 1 | Feedback | I want my perception to be able to handle it with interest-focused visualization. And there's the zooming feature, where things expand as you zoom in. When you zoom out, groups, classes, or structures collapse, making it easier to manage perception. I think it needs to be optimized. |
18 | 3 | 1 | anomaly detection | Post trial question 3 | Feedback | We would also look at the anomaly detection parts from time to time, but it would be better if it reported to us. Because I can't keep checking it all the time. If it could report to me without spending time on it, I would look into the details. |
19 | 3 | 1 | software artifact traceability graph construction | Post trial question 3 | Feedback | This database could be evolved and developed at different points. For example, using a data processing tool like Pandas to evaluate the database, create different dashboards, show them, and maybe allow the user to do this. We write SQL here, but having our own language there, for instance, if I could do things with it, would be nice. It's just an idea, of course, very detailed and comprehensive topics that I can't fully detail, but such structures could be good. Of course. |
20 | 1 | 1 | user value and motivation | Post trial question 3 | Feedback | There's value here. We need to shape what the value means to whom and who will actually benefit from it. That's when we'll truly understand if it's valuable. Whether a finding is valuable or just remains a finding. If we look at it from a perspective-based view, it could be good for different industrial roles, in different areas, and according to the positions of different users. We need to go from the user story to the build and deploy stages. Creating insights for the product manager, the development team, developers, the test side, or the DevOps side. Because the same thing won't have the same value for everyone, but there are value sets you can present from different perspectives. If you present it from different perspectives, the benefit increases. |
21 | 1 | 1 | SAA perspective | Post trial question 3 | Feedback | Instead of policing, it's naturally better to approach it from a health perspective. Terms like the health of the project or the health of the development are better. Otherwise, the terms finding and policy can be a bit intimidating and scary. Everyone would avoid that. Values like the number of commits, the number of changes, or man-hours usually embarrass people. |
22 | 7 | 2 | SAA incremental graph model | User trial task 1 | Feedback | The criticality of commits can also be added to the features. Is it something trivial, or has it solved a critical blocker issue, or just a normal problem? It would be great if it provided that information next to commits that solve difficult issues or if they were marked. |
23 | 6 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Feedback | Another feature could be selecting a specific set of code files and analyzing them together. Sometimes multiple files need to be examined together. Finding collective information about files that change together would be useful. |
24 | 6 | 2 | SAA file clustering | User trial task 1 | Feedback | I can suggest focusing on file groups instead of individual issues, clustering files that change together in the same PR. Five files might change in one PR, and four in another, but understanding they form a cluster is more practical. Focusing on these clusters makes more sense because a single file can be misleading. |
25 | 6 | 2 | SAA integrating new analysis | User trial task 1 | Feedback | Or you could consider an additional analysis. Who are the files associated with this file? Using traceability, you can find out who worked on files associated with this one. You can come up with additional analysis. Some things are routine activities, for example. When we add a new configuration, we need to write it in 50 files. If you forget, it fails. Linking and showing that would be very useful. Also, explaining how the system fails would be very helpful. |
26 | 8 | 2 | SAA incremental graph model | Post user trial question_1 | Feedback | However, the importance of issues, the size of files and commits, maybe some coefficients can be added. Otherwise, it already looks quite diverse. I'm thinking about what I use in daily life, and I use these. |
27 | 6 | 2 | SAA incremental graph model | Post user trial question_1 | Feedback | Also, for example, the review relationship, the total number of reviewed lines. Similarly, the review goes back and forth, how many times it turned. Because, for example, it's reviewed, then changed again, and reviewed again. Turn count. |
28 | 8 | 2 | SAA integrating new analysis | Post user trial question_1 | Feedback | Also, I can say something. In Jira, sometimes I accidentally close a task and reopen it 10 seconds later. An analysis to eliminate such situations from the data can be helpful. |
29 | 9 | 2 | SAA integrating new analysis | Post user trial question_1 | Feedback | Here, we talked about these artifacts, but it's the same in corporates. The CI in that PR is successful, or maybe there are multiple CIs. Sometimes a single successful build is sufficient, but there are actually 3 builds running. Things there could also be taken into consideration. Suggesting authors whose builds passed successfully, or suggesting authors of commits that were successfully released somewhere, could bring nice outcomes. From our example, for front-ends every 3 months, we might have a library. Because our product will be used by derivative products from us, but if it's not needed by default, we move it somewhere. Because it's not needed for my product but I'll prevent derivatives. If the other two passed, this one has also passed. So, the person who made the commit has written proper, good code. Suggesting those people would be better. |
30 | 9 | 2 | SAA graph model adding build artifacts | Post user trial question_1 | Feedback | CI runs, for example, if we have 3 builds, we accept if 1 passes, but this could be added for clarification. |
31 | 6 | 2 | SAA file clustering | Post user trial question_1 | Feedback | The clusters formed by the files I'm talking about. The files that change together. |
32 | 10 | 2 | SAA reviewer/expert recommendation analysis | Post user trial question_3 | Feedback | Commenting can also be included in finding experts. |
33 | 6 | 2 | SAA integrating new analysis | Post user trial question_2 | Feedback | It would actually be nice to look at what I've done in the past for myself. |
34 | 10 | 2 | SAA integrating new analysis | Post user trial question_2 | Feedback | Something could be added regarding whether teams actually fit the team structure you designed, based on code files. Maybe something like that. Clustering developers in that way. Like in our company, sometimes a team starts with 4 people, then grows to 8, and then they split. But how should they be split, or according to what concept should new work be structured? |
35 | 9 | 2 | SAA integrating new analysis | Post user trial question_2 | Feedback | An analysis could be added to find critical developers. Like, if these critical developers leave, what happens? |
36 | 6 | 2 | dashboard for insights | User trial task 1 | Feedback | I would just add something to the visualization. The graph display is nice, but you can't explain everything. Graphics are more like charts, reports. We need to add dashboards. |
37 | 1 | 1 | user value and motivation | Presentation | Inquiry | As a user, how would I use this? What value would I get? What is the motivation to perform analysis with this tool? |
38 | 1 | 1 | user value and motivation | Presentation | Inquiry | For developers or managers? |
39 | 1 | 1 | inquiry about user value and motivation | Demo | Inquiry | We are not at an operational level right now. So, what is our purpose for using historical data in this? |
40 | 1 | 1 | inquiry about reviewer recommendation value | Demo | Inquiry | We are currently working with existing data retrospectively. Our current approach is approximately regressive. Naturally, we are not at an operational level yet. This pull request may be open or closed and completed. So, what is the value of recommending a reviewer here? |
41 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Inquiry | Can we manually add nodes? By going in between. Sometimes it's interesting to see if there is involvement from a developer in a specific file or commit beyond just running queries. |
42 | 2 | 1 | limitations of visualization tools | pre_survey_4 | Negative | As one of the people who gave a 4, I think that although visualization is a very powerful tool, sometimes some abstractions and summaries we make there can cause us to miss some details. That's why I gave it a slightly lower score, considering that aspect. |
43 | 1 | 1 | limitations of visualization tools | pre_survey_4 | Negative | Visualizer. Well, it can stay more in terms of inclusiveness. |
44 | 1 | 1 | external tool usage concern | Demo | Negative | Think of it this way, sir. I use GitHub, and I see this as a separate tool. If it's embedded within GitHub, the problem is nicely solved, but if it's separate, you're suggesting using two tools. |
45 | 1 | 1 | external tool usage concern | Demo | Negative | I usually want to keep it to a minimum. I don't want to go beyond one or two because I can't manage it, mentally as well. |
46 | 1 | 1 | SAA reviewer/expert recommendation analysis | Demo | Negative | On a separate note, relying on historical data can cause us to idolize the existing data. When the past repeats itself today, there must be a second person involved. Less qualified people should manually or rotationally handle this. |
47 | 1 | 1 | SAA reviewer/expert recommendation analysis | Demo | Negative | Score calculation is not a highly supported topic, actually. |
48 | 1 | 1 | SAA graph model adding build artifacts | Post user trial question_1 | Negative | Starting with a user story and providing delivery on the field, there is also the build aspect. In reality, it's the execution block. On the far left, there is an initiative, a job initiative, and on the far right, the actual working deployment, maybe not necessary to include, but there is the build. So, we have two gaps on the left and right sides. On the left side, as mentioned features, and on the right side, I think the build is missing. |
49 | 2 | 1 | SAA software artifacts | Post user trial question_1 | Negative | There's also this situation, I think it would provide more accurate information if deployments were used instead of commits. Because not every commit means something, but every deployment means a change for the product. Whether it's a different solution or a new feature addition. |
50 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Negative | the icons (node icons) are a bit too large, I'm saying this as a user. Maybe you can make it a bit more aesthetic. The node icons and such are too big. They try to draw attention to themselves. The icons stand out more than the context. But that's just a UI problem. |
51 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Negative | At some point, it feels like there are too many objects on the screen. For example, we clicked on something, something opened, we clicked on another thing, something else opened, and I might get lost there. So how should I not get lost? |
52 | 1 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Negative | We're initially focused on a specific dataset, which works according to the criteria I set. Since our world is selective, and the tool works with a selective dataset, I'm essentially setting the initial parameters. My limitations, my potentially incorrect assumptions, are what it works with. The tool doesn't have the freedom to explore and discover everything on its own. Since it remains within the questions I have, it has to take over from me. Just like when you're not confident in the first result, here, because I set the boundaries, it inherently has that aspect. |
53 | 2 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Negative | Inevitably, because we're talking about statistical science and related aspects, I think there's a bias. In previous projects I worked on, the person responsible for the entire project in one day changed because we moved to a different repo, and all the work appeared to be done by one person. Such situations can erase the past, making it unreliable. |
54 | 3 | 1 | SAA Software Artifact Inspection from drawing canvas | Post trial question 3 | Negative | And visually, even though there are many objects, it opens new doors as you dive in. It’s like a tree forming. At a certain point, we might lose track of it. |
55 | 2 | 1 | SAA software artifacts | Post trial question 3 | Negative | More specifically, probably the parts related to commits. I'm not sure how much it would benefit us because I'm generally opposed to measuring something in a project based on commits. |
56 | 8 | 2 | software visualization | pre_survey_4 | Negative | I might have rated this a 3. Visualization is important, but to a certain extent. It's not all about visuals; text-based information is also important. A combined version of both might provide a more reasonable experience. |
57 | 6 | 2 | anomaly detection | pre_survey_7 | Negative | It seems to me that these process anomalies are developed more on open-source data, so they don't map well to us. For example, we don't have a case where a bug is forgotten. Either we don't do it or we do it. This seems like a situation that could occur more within a community. |
58 | 7 | 2 | SAA complexity case | Demo | Negative | Also, in a normal project, there might not be so many artifacts. There are a lot of changes in the core artifacts, and everyone changes them there. That can explode there. |
59 | 9 | 2 | SAA drawing canvas | User trial task 1 | Negative | Yes, actually, if I could customize things in the tool, I could find things no one else can. But the set comes stable initially. For example, I could add things like low-priority issues or areas with fewer lines of code. I'm saying this to point out the shortcomings. |
60 | 6 | 2 | dashboard for insights | User trial task 1 | Negative | As I mentioned earlier, I gave it a 3. The graph isn't sufficient for me. |
61 | 1 | 1 | software visualization | pre_survey_3 | Neutral | The levels of visualization can vary greatly. |
62 | 1 | 1 | use of visualization tools | pre_survey_3 | Neutral | For example, with the Azure DevOps I am currently using, I use it for both work items and source control items. There are normal relational visuals, but we cannot fully understand the power and capabilities of the visualization being referred to here. So, yes, they do visualizations. You can track, navigate, and move from one item to another, following the work. |
63 | 2 | 1 | use of visualization tools | pre_survey_3 | Neutral | Previously, I worked at different companies, and during my time there, we heavily used visualization-based software primarily to see customer analytics in graphical form. Additionally, we used such software to examine logs of our applications, to see how they behaved, and to identify patterns, all through graphical means. |
64 | 3 | 1 | use of visualization tools | pre_survey_3 | Neutral | Well, we also have tools that we use currently, but visualization... I don't remember if I said no or yes, but I probably said no. We mostly use text-based tools, but as my colleague mentioned, we do have a visual tool for tracking logs. So, thinking about it that way, I might say we use it too. We also have various tools we use for database design and designs. If those are considered in this category, we might say we use them too, but mostly we work with text-based tools, I think. |
65 | 1 | 1 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | I was referring to tools like SonarQube or those built into the development environment. They are not directly integrated with all changes but can reside separately, like graphics capabilities that can vary. They can compare separately. Or applications like SonarQube can reveal other things outside of the integrated structure. |
66 | 1 | 1 | SAA graph model adding multiple issue nodes | Post user trial question_1 | Neutral | There is a hierarchy of types. A feature is a larger task, and under that, there are smaller tasks that contribute to it. |
67 | 1 | 1 | SAA software artifacts | Post user trial question_1 | Neutral | Actually, our main principle is this, we don't develop just for the sake of development. Our main concern is not finding bugs or writing code, our main concern is to create a working product, and the main sequence of the working product is indeed an executable output of a business story. The activity in the middle is artificial, it’s our development engineer's own problems. The business itself has no concern or relation to this. We found bugs, wrote code, committed them, etc., those are our issues, not a business issue. The business issue is where is my needed working product. The section on the board is our software development's own problems. The business world has no counterpart to it. |
68 | 1 | 1 | SAA integrating new analysis | Post user trial question_2 | Neutral | Once there's an issue side, like adding a user story related to issues, that part also expands. I think it's on the right. Focusing on the errors that come from there is also revealed. This way, the relationship between the user story and the errors starts to be discovered. |
69 | 5 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Neutral | I chose Peter because he was one of the last committers. |
70 | 2 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Neutral | I think I found the wrong result, but Michael had quite a lot of commits. I saw the number of commits, so I chose him. |
71 | 1 | 1 | software visualization complexity management | User trial task 1 | Neutral | Despite the beauty of visuals, it actually started in different places before. Maybe you've encountered it a few times as well. Normally, summarizing everything visually seems great, but suddenly realizing the vast size of the dataset we're working with makes us need to narrow it down again. For instance, when you first lay out all the projects in the trace, you can't understand anything. It's impossible because such a large structure emerges. Here, for instance, there were 7,000 nodes and 23,000 relationships. In a project, you easily reach that number within a year. By the third or even the second year, the numbers become very large. You need to transform it into a structure focused on interest. |
72 | 1 | 1 | software visualization complexity management | User trial task 1 | Neutral | Actually, the issue is focused visualization, something that both human perception can handle and can be engaging. I'm saying it's beautiful because if you put everything together, it becomes an incomprehensible structure. |
73 | 9 | 2 | use of visualization tools | pre_survey_3 | Neutral | For example, with the Azure DevOps I am currently using, I use it for both work items and source control items. There are normal relational visuals, but we cannot fully understand the power and capabilities of the visualization being referred to here. So, yes, they do visualizations. You can track, navigate, and move from one item to another, following the work. |
74 | 7 | 2 | use of visualization tools | pre_survey_3 | Neutral | Not much visualization, but we were using SonarQube for code analysis. Apart from that, tracking code developments with Jira is not very easy because you can't know what will come up in the code. There's no visual aspect. Just for code analysis. |
75 | 6 | 2 | use of visualization tools | pre_survey_3 | Neutral | Actually, we used different tools for different purposes. We also used SonarQube, for static code analysis. Similarly, we tried around 7-8 tools for measuring engineering productivity and working metrics of engineers. We started with one, then moved to another. This is our experience. If you ask what we use now, we use a tool called Swarmy. What does Swarmy do? It analyzes data related to Jira and presents various graphics on dashboards. It also generates metrics from its own CI/CD tools, providing a general organization. |
76 | 8 | 2 | use of visualization tools | pre_survey_3 | Neutral | We don't use it, but we use SonarQube, though I didn't think of it as very visual. It guides the code more. Also, we log all mentions on GitHub. We can see who did what through various graphs there. |
77 | 7 | 2 | SAA software artifacts | pre_survey_5 | Neutral | In the end, source code is always changing. For example, you draw the diagram initially, and it stays there. 90% of the time, it remains in its initial state, and no one updates it later. |
78 | 8 | 2 | SAA software artifacts | pre_survey_5 | Neutral | I can talk about the less common ones, like UML. These are generally found in more established, corporate settings, I guess? They might exist in such companies. But in start-up environments, companies need to be more profitable. For example, I haven't used it at all, unless we're counting databases, in the last 3-4 years, not even once. A decision is made in a meeting and that's it. That document is lost and gone forever. |
79 | 7 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | These things usually don't come up. You see who is an expert from the code they write. You give them a task, they solve it, and you think, "Yes, this is good," and you give them more tasks. |
80 | 8 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | If a team is established, everyone knows each other after a while. It's like that. We don't have a tool, but everyone has an idea in their heads, and it's the same with us. |
81 | 9 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | It's the same with us. If there's an issue with a service, everyone knows who understands it better. There's a mental mapping, but we don't use a tool to identify it. |
82 | 10 | 2 | use of expert or reviewer suggestion tool | pre_survey_6 | Neutral | But Git blame can sometimes give misleading results. An issue can be assigned to the wrong person. |
83 | 7 | 2 | anomaly detection | pre_survey_7 | Neutral | We don't have such a tool, but after an incident, we usually hold a meeting. We discuss and identify process smells there. |
84 | 9 | 2 | anomaly detection | pre_survey_7 | Neutral | We have some metrics. If a 1,000-line code review document is approved in 2.5 hours, why? Or if a 1,000-line code is approved in 2 minutes, you can't read 1,000 lines in 2 minutes. We do this by sampling. We get documents and ask the reviewers if the PR was closed too quickly. They might say it was a configuration change, so they knew about it. It's a process we follow, not tool-based. These metrics are collected during the sprint, and anomalies are investigated. |
85 | 9 | 2 | anomaly detection | pre_survey_7 | Neutral | There are metrics like time, code lines, and certain people who need to review according to the repo. These are related to the code review process. We also have corporate processes related to the bug tracking process. Maybe that applies here too? I'm not sure. A bug is not assigned immediately to someone. It first goes to the technical manager of the relevant module, who analyzes it and then directs some to others, sends some to the board, etc. It's already a corporate process. |
86 | 1 | 1 | software visualization | pre_survey_4 | Positive | It's natural, I think it's an evolutionary situation. Our visual intelligence is evolutionarily more advanced than our other senses. |
87 | 1 | 1 | software artifacts graph model inclusion | pre_survey_5 | Positive | Well, according to your Graph model, you already encompass all four. There are files, issues, commits. Pull requests are already artificial artifacts in between, so I might be completing it for that reason. |
88 | 4 | 1 | external tool usage concern | Demo | Positive | When I first started as a junior developer, I didn't have enough experience to find a reviewer. I was so desperate that using another tool wouldn't have bothered me that much. |
89 | 3 | 1 | SAA reviewer/expert recommendation analysis | Demo | Positive | So, scoring who has been more involved with this file can make my job easier when assigning reviews. |
90 | 1 | 1 | SAA incremental graph model | Post user trial question_1 | Positive | Of course, it can be added. (new artifacts) |
91 | 1 | 1 | SAA drawing canvas | Post user trial question_3 | Positive | It's a nice, commonly used canvas. |
92 | 2 | 1 | SAA Software Artifact Inspection from drawing canvas | Post user trial question_3 | Positive | I think the visual representation and the things done here have a benefit for companies. We always try to find measurable things about people's performances or what they do. Providing such a history and showing progress over time, for instance, showing that Hans had issues with certain commits and bugs, but a year later, he improved. Putting such things into measurable terms is really valuable. This is something we always lack. |
93 | 2 | 1 | software visualization | Post user trial question_3 | Positive | Seeing this visually is really important as it provides evidence. Usually, these things are done based on gut feelings, or you look at who closed the most issues, but here you can see much more detailed and valuable insights. |
94 | 1 | 1 | SAA integrating new analysis | None | Positive | There could be another area where you can create value. Especially if you link this to builds or deployments. What is basically included in this build, what different things were committed, based on pull requests and user stories. Similar things are done in some aspects, if you remember. You are visualizing that. There can be some value derived from that. |
95 | 1 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | It’s a somewhat obvious question because one involves human effort and potential errors, while the other is automatic. It's a self-confirming thing. |
96 | 4 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | Also, when looking at GitHub, I sometimes feel like I might have missed something. I know I'm guessing, but here, since I get a suggestion, it feels a bit more reliable. With GitHub, I worry if I missed something. |
97 | 1 | 1 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | This also feels complex to me (GitHub analysis). At some point, the file size increases and it becomes complex, exceeding human limits. |
98 | 1 | 1 | software visualization | User trial task 1 | Positive | Visualization, as you initially said, is a stronger area for our perception. Compared to verbal or written structures, visual structure is much better. I think we first have visual then written perception. Visuals inherently have high value. |
99 | 4 | 1 | anomaly detection | User trial task 2 | Positive | Well, JQL is generally very difficult, not user-friendly at all. So personally, I liked having this as an alternative. |
100 | 3 | 1 | SAA reviewer/expert recommendation analysis | Post trial question 3 | Positive | I would definitely use the scoring part (expert, reviewer part). That's one of my favorite areas. I think it could be further developed. |
101 | 3 | 1 | SAA Software Artifact Inspection from drawing canvas | Post trial question 3 | Positive | Some of our tasks are very detailed, but at some point, it would really make our job easier. "Oh, really? What happened next? Who reviewed it? What issues were there?" Visually, it would be really useful, and we would probably use it. |
102 | 3 | 1 | software artifact traceability graph construction | Post trial question 3 | Positive | But I think our strongest point, which might not have been discussed here, is being able to take all this data, create a database, and visualize it. It's a really nice feature. |
103 | | 1 | anomaly detection | Post trial question 3 | Positive | For me, the part that caught my attention was the anomaly detection section. I think the anomaly section is very valuable. I would definitely use that. |
104 | 2 | 1 | SAA integrating new analysis | Post trial question 3 | Positive | I think a world where it's easier to integrate new things would be better. |
105 | 6 | 2 | CASE tool selection | pre_survey_1-2 | Positive | The community already supports it (GitHub and Jira). More importantly, when you use these two products, you can easily use other tools as well. For example, if you want to use any analytical platform, they all support these two. So, you don't need another tool. |
106 | 6 | 2 | software visualization | pre_survey_4 | Positive | Absolutely, it's better to have visual support than walking blind. Metrics are everywhere, in logs and other places, but being able to see them at a glance, to say "Ah, there's a problem here," is very important. |
107 | 9 | 2 | software visualization | pre_survey_4 | Positive | I thought visualizing could be useful to quickly catch patterns at a glance. For example, bugs always assigned to the same person or a bug constantly pointing to the same source code. It's much faster to see this visually rather than in text. So, it indicates there might be an issue with this source code or this file. It helps to catch the pattern at a glance. But of course, it needs to go into details. I might not be able to comment on the details, but I think it helps to catch the correct pattern at first glance. |
108 | 9 | 2 | SAA perspective | Demo | Positive | If I take this to our company, it would be actively used from the first day. For our product, I'm saying. We are writing a product, and since we have a team of 70 people, it works well. Because there are teams separated by domain. When one domain writes an integration for another, they know their own team well but may lack information on who to ask in the other team. Of course, all of this can be solved through communication, but with so much remote work nowadays, you can't just get up and ask the person next to you. Instead, you write it in the metrics and everyone sees it. I would make them buy this tool. |
109 | 9 | 2 | SAA perspective | Demo | Positive | Maybe others will agree about corporates. In corporates, it's really hard to measure someone's performance. In startups or places where the entrepreneur provides the capital, they already closely monitor the person. Or there are fewer people, so it's quickly visible. There was a law called social accountability or something like that. In a team of 50, there are 5 people who don't work, and it's very hard to find and prove that, and there's no penalty according to labor laws. If we had a tool that says, "You keep getting bugs in your code, the analysis tool says so," we could have a one-to-one meeting and after 3-5 meetings, say, "We have to part ways because this tool shows your performance isn't good." We actually care a lot about these metrics for that reason. If you look at 1000 lines of code in 2 minutes, there must be an explanation, or it means you just looked at it superficially. So, for companies like ours, corporates might benefit more. |
110 | 10 | 2 | SAA reviewer/expert recommendation analysis | Demo | Positive | I agree with this usage. In teams, from a developer's perspective, there are 6, 7, 8 people. Within the team, I know who wrote what. Sometimes we need to go to a point touched by another team. I don't know who wrote what in the other team, but by looking at this tool, I can understand who knows what. That seems quite advantageous to me. |
111 | 8 | 2 | SAA incremental graph model | Post user trial question_1 | Positive | I think the relations are quite comprehensive and sufficient. |
112 | 9 | 2 | SAA drawing canvas | Post user trial question_3 | Positive | I would use it too, it would be useful for me. I'm in DevOps, for example. When a problem arises somewhere, I can quickly go and find the person responsible without asking anyone. "Why are we breaking these?" I often say. |
113 | 7 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | Well, I wrote what the tool suggested. The tool's suggestion made sense to me. |
114 | 6 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | This actually shows that it was a difficult question. It indicates that it can't be solved at a glance and that tool support is required. |
115 | 6 | 2 | SAA reviewer/expert recommendation analysis | User trial task 1 | Positive | Searching manually is more difficult, of course. The tool directly shows the results and reasons. |
116 | 7 | 2 | SAA value | Post trial question 3 | Positive | We could find the person who works the most on the project. Let's not miss this. We can say, "This person is good." It guides us. |
117 | 9 | 2 | software visualization | Post trial question 3 | Positive | Many tools don't provide visual communication. And I can navigate and see, I can examine relationally. It's useful because it's multifaceted. Actually, in our tools, we can say something according to something. But if there is such data in an issue area and the developer did such a job, it allows us to have multifaceted data. |
118 | 10 | 2 | SAA graph layout | Post trial question 3 | Positive | The expandability of the graph is very good, I think. I first... |