Social Software Engineering

In the social computing space, my students and I have developed quantitative methods for the assessment of qualitative questions regarding human behaviour, such as trust, health and related computations in online social networks, human mental workload assessment, recommendation, market trading, and most recently questions in the domain of social software engineering.

Social Computing is the area concerned with the intersection of social behaviour and computation, encompassing fields such as social networking, collaborative work, and collective intelligence. See "Social Computing", the introduction to the Social Computing special edition of the Communications of the ACM, edited by Douglas Schuler, Volume 37, Issue 1 (January 1994), pp. 28–108.

Social Software Engineering is a branch of software engineering concerned with the social aspects of the software development process and with the study and construction of socially-oriented tools to measure and support collaboration and knowledge sharing in software engineering. It can also be viewed as a specialism of computational social science.

The technical basis of the approach is the application of defeasible/non-monotonic argumentation schemes, borrowed from proponents of the strong AI model such as John L. Pollock, but applied to the assessment of human behaviour rather than the replication of human decision making, and structured as computationally efficient filtering and ranking schemes for execution over large-scale, complex and heterogeneous behavioural data sets. For example, see Pollock, J. L. (1987). Defeasible reasoning. Cognitive Science, 11(4), 481-518, and Pollock, J. L. (1995). Cognitive Carpentry: A Blueprint for How to Build a Person. MIT Press, Cambridge, MA, USA.
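
As a minimal sketch of what such a scheme can look like in practice (the rule thresholds, field names and the trust example below are assumptions for illustration, not the project's actual rules), a prima facie argument is raised from a behavioural record, may then be undercut by a defeater, and the surviving claims are ranked:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A tentative (defeasible) conclusion about a subject, e.g. 'user X is trustworthy'."""
    subject: str
    score: float = 0.0
    defeated: bool = False
    reasons: list = field(default_factory=list)

def trust_scheme(record):
    """Prima facie argument: sustained, reciprocated interaction suggests trust."""
    if record["interactions"] >= 10 and record["reciprocity"] >= 0.5:
        return Claim(subject=record["user"], score=record["reciprocity"],
                     reasons=["sustained reciprocated interaction"])
    return None

def apply_defeaters(claim, record):
    """Defeater: repeated complaints undercut the prima facie conclusion."""
    if record["complaints"] > 2:
        claim.defeated = True
        claim.reasons.append("undercut by complaints")
    return claim

def filter_and_rank(records):
    """Single pass over behavioural records: build claims, defeat some, rank the rest."""
    survivors = []
    for r in records:
        claim = trust_scheme(r)
        if claim is None:
            continue
        claim = apply_defeaters(claim, r)
        if not claim.defeated:
            survivors.append(claim)
    return sorted(survivors, key=lambda c: c.score, reverse=True)

ranked = filter_and_rank([
    {"user": "alice", "interactions": 25, "reciprocity": 0.8, "complaints": 0},
    {"user": "bob",   "interactions": 40, "reciprocity": 0.9, "complaints": 5},
])
# Only alice's claim survives; bob's is undercut by complaints.
```

Because each record is filtered and ranked independently in a sketch of this shape, the pass parallelises naturally over large data sets.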

We are now applying this approach exclusively to challenges in social software engineering, treating software development as a computational sociological phenomenon, but with particular focus on the contributing role of individuals in the collaborative creation of the software artefact.

Understanding code as the primary artefact of a social network process makes possible a variety of social analyses using frameworks such as Alex Pentland's social physics model of influence, social learning, and peer pressure between individuals. For example, see W. Pan, W. Dong, M. Cebrian, T. Kim, J. H. Fowler and A. S. Pentland, "Modeling Dynamical Influence in Human Interaction: Using data to make better inferences about influence within social systems," IEEE Signal Processing Magazine, vol. 29, no. 2, pp. 77-86, March 2012. Moreover, it allows for the development of defeasible schemes of idealised behaviour of network participants, elucidated from theory regarding best software engineering practice, that can be used to differentiate observed engineering practice amongst development teams, and thereby to trace and attribute the impact of observed behaviours on process efficiency and quality. By combining an analysis of group dynamics with a fine-grained, code-centric analysis of the individual's behaviour and performance, qualitative questions of software engineering practice can be addressed.
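
One hedged illustration of the idealised-behaviour idea (the rules and event fields below are invented for the example and are not drawn from any published scheme) is to encode each piece of idealised practice as a predicate over observed development events and to accumulate conformance per developer:

```python
from collections import defaultdict

# Invented rules for illustration: each encodes one piece of 'idealised' practice
# and returns True when an observed event conforms to it.
IDEALISED_RULES = {
    "small_commits": lambda e: e.get("lines_changed", 0) <= 400,
    "tests_touched": lambda e: e.get("touches_tests", False),
    "peer_reviewed": lambda e: e.get("reviewers", 0) >= 1,
}

def conformance_by_developer(events):
    """Fraction of each developer's events conforming to each idealised rule."""
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # dev -> rule -> [conforming, total]
    for e in events:
        for name, rule in IDEALISED_RULES.items():
            tally = counts[e["author"]][name]
            tally[0] += int(bool(rule(e)))
            tally[1] += 1
    return {dev: {rule: conforming / total for rule, (conforming, total) in tallies.items()}
            for dev, tallies in counts.items()}
```

Profiles of this kind could then be contrasted across teams, or fed into the defeasible filtering and ranking schemes sketched above.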

The purpose of this research and technology agenda is to provide developers, managers and educators with insight into individual and team contribution and performance in software engineering, and to facilitate real-time measurement and incentives to guide practice improvement. We are developing a cloud-based technology that heavily instruments the development process, both in terms of the measurement of input behaviour and in terms of the longitudinal assessment of emerging code quality, thereby yielding a highly scalable behaviour-measurement platform that is tailored to the analysis of software development processes and the contribution of individuals within them.
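
A sketch of the kind of data such a platform might record, using hypothetical field names rather than the platform's real schema: one stream of input-behaviour events, a second stream of longitudinal quality snapshots, and a naive attribution of quality change to the developers active in between:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BehaviourEvent:
    """One observation of developer input behaviour (fields are illustrative)."""
    developer: str
    kind: str          # e.g. "edit", "build", "test_run", "commit", "review"
    artefact: str      # file or module the event touched
    timestamp: datetime

@dataclass
class QualitySnapshot:
    """Periodic snapshot of an artefact's quality, for longitudinal assessment."""
    artefact: str
    timestamp: datetime
    test_coverage: float   # 0..1
    static_warnings: int

def attribute_coverage_change(events, before, after):
    """Naive attribution: share the coverage delta on an artefact among the
    developers whose events touched it between the two snapshots."""
    window = [e for e in events
              if e.artefact == after.artefact
              and before.timestamp <= e.timestamp <= after.timestamp]
    authors = {e.developer for e in window}
    delta = after.test_coverage - before.test_coverage
    return {a: delta / len(authors) for a in authors} if authors else {}
```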

Application

In the educational space, I seek to externalise the assessment of individual and team-based formative software engineering tasks through fine-grained monitoring of development behaviour and gamified reflection of performance.

On externalisation, see R. Susskind and D. Susskind. (2015). The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford University Press. The book predicts a methodology whereby professional activities, largely delivered at present on a craftsmanship basis, will be decomposed into tasks genuinely requiring human professional intercession, and tasks that can, through a process of standardisation, systematisation and automation, be externalised through delivery as autonomous services that match or exceed the capability of present professional delivery.

Gamification is the application of typical elements of gameplay, such as points scoring, rules of play and competition, to other areas of activity, typically with the goal of enhancing motivation, participation and engagement. For example, see J. Hamari, J. Koivisto and H. Sarsa. (2014). Does gamification work? A literature review of empirical studies on gamification. pp. 3025-3034, 47th Hawaii International Conference on System Sciences (HICSS), IEEE.

This will provide a comprehensive toolset for automated best-practice guidance and grading in student lab work, one that can complement more traditional assessment of output artefacts such as submitted software. At present we tend to grade the ‘what’ rather than the ‘how’, leaving the learning and assessment of professional practice development unsolved.
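
Purely as an illustration of gamified reflection (the practices, predicates and point values below are invented for the example), daily activity summaries could be scored against a small rule table that yields both points and formative prompts:

```python
# Invented point rules: each pairs a practice with a predicate over a student's
# daily activity summary and a point value.
POINT_RULES = [
    ("commits early and often", lambda day: day["commits"] >= 3,       5),
    ("keeps the build green",   lambda day: day["failed_builds"] == 0, 3),
    ("adds or updates tests",   lambda day: day["tests_added"] > 0,    4),
    ("reviews a peer's work",   lambda day: day["reviews_given"] > 0,  4),
]

def daily_feedback(day_summary):
    """Return points earned plus a formative prompt for each practice missed."""
    points, prompts = 0, []
    for label, satisfied, value in POINT_RULES:
        if satisfied(day_summary):
            points += value
        else:
            prompts.append(f"Not observed today: {label}")
    return points, prompts
```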

Second, I wish to provide tools that better support the orchestrated application of professional software engineering practice in team settings. Software development is a fundamentally complex creative process, implemented by highly skilled people, constructing systems that Frederick Brooks considers to be ‘more complex for their size than perhaps any other human construct’. See F.P. Brooks. (1987). "No Silver Bullet. Essence and Accidents of Software Engineering", Computer, vol. 20, no. 4, pp. 10-19, April, doi:10.1109/MC.1987.1663532. Brooks has also memorably observed that ‘the central question in how to improve the software art centers, as it always has, on people’. However, measurement of individual performance and team role remains a challenging, unsystematic activity in the industry. The challenge, then, is to better assess the real performance and skill sets of developers and their teams, as automated, actionable measures and insights.
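
As a closing sketch of what ‘automated, actionable measures’ might look like at their simplest (the commit fields and the three measures below are assumptions for illustration), per-developer profiles can be derived directly from version-control history:

```python
from collections import Counter

def contribution_profile(commits):
    """Illustrative per-developer measures: breadth (distinct modules touched),
    churn (lines changed), and collaboration (co-editing modules with others)."""
    modules_by_author = {}   # author -> set of modules ever touched
    churn = Counter()
    touched = {}             # module -> set of authors who edited it
    for c in commits:
        modules_by_author.setdefault(c["author"], set()).update(c["modules"])
        churn[c["author"]] += c["lines_changed"]
        for m in c["modules"]:
            touched.setdefault(m, set()).add(c["author"])
    collaboration = Counter()
    for authors in touched.values():
        if len(authors) > 1:
            for a in authors:
                collaboration[a] += len(authors) - 1
    return {a: {"breadth": len(mods), "churn": churn[a], "collaboration": collaboration[a]}
            for a, mods in modules_by_author.items()}
```

Measures of this kind are deliberately simple; the point is that they are computed automatically from the artefacts developers already produce, rather than from self-report.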