In today’s increasingly connected world, information spreads rapidly: messages get retweeted, photos get reposted and videos go viral. Such online interactions can have real-world consequences, as demonstrated by movements such as Occupy Wall Street and the Arab Spring, in which protesters communicated and spread ideas via social media.

The spread of information may seem unpredictable, dependent on the fickle, ever-changing whims of Internet users.

But what if we could predict what things will go viral? And not only predict such outbreaks, but influence them? Could we prevent the spread of potentially dangerous information, or get an important message out to as many people as possible?

Answering these questions is the goal of a U.S. Army-funded research initiative, the Network Science Collaborative Technology Alliance, of which UCSB computer science professors Ambuj Singh, Xifeng Yan and Tobias Hollerer are key participants. The team is in the fourth year of a 10-year study funded at approximately $1 million per year.

Singh said that his team is seeking common principles that govern network behavior, along with algorithms and tools for analyzing it.

“The aim of the project is to take networks that are evolving [and] changing over time and find the regions that are changing,” Singh said. “Then model them to attempt to explain why they are changing.”

To accomplish this, Singh said that the team built a model for each user on the network based on their preferences and interests, as well as a model for messages based on the information they contain. When a message reaches a user, the system attempts to determine, based on the user’s preferences and the message’s content, whether or not that user will share it with others — in essence, whether or not the information will spread.

“You build a model for every agent, and for how often information is being transmitted on the link, you have a model for what is in that message. Is this a message about politics? Is this a message about sports? Based upon that you try to understand the network dynamics,” Singh said. “We can look at a network, and based on the behavior of the agents, we can say that if you want the information flow to happen in the fastest possible manner, then this is what you should be doing.”
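The article does not detail the team’s implementation, but a minimal sketch of the kind of per-agent model Singh describes might pair a topic-interest vector for each user with a topic vector for each message, with the match between them driving the odds of a reshare. Everything below (the logistic link, its weights, the cascade threshold) is an illustrative assumption rather than the team’s actual code:

```python
import math

# Hypothetical sketch of the per-agent model Singh describes: each user
# gets a topic-interest vector, each message a topic-content vector, and
# the match between them drives the estimated chance of a reshare.
# The logistic link and its parameters are assumptions for illustration.

def share_probability(user_interests, message_topics, bias=-2.0, gain=4.0):
    """Estimate the probability that a user reshares a message.

    user_interests: dict of topic -> interest weight in [0, 1]
    message_topics: dict of topic -> how strongly the message covers it
    bias: baseline log-odds (negative: most messages are not reshared)
    """
    match = sum(user_interests.get(topic, 0.0) * weight
                for topic, weight in message_topics.items())
    return 1.0 / (1.0 + math.exp(-(bias + gain * match)))

def simulate_spread(graph, agents, message, seed, threshold=0.5):
    """Threshold cascade: a message crosses a link whenever the receiving
    agent's estimated share probability clears the threshold.

    graph: dict of user -> list of followers; agents: user -> interests.
    """
    shared, frontier = {seed}, [seed]
    while frontier:
        user = frontier.pop()
        for follower in graph.get(user, []):
            if follower not in shared and \
               share_probability(agents[follower], message) >= threshold:
                shared.add(follower)
                frontier.append(follower)
    return shared

# Example: a political message spreading through a tiny three-user network.
agents = {"a": {"politics": 0.9},
          "b": {"politics": 0.7, "sports": 0.3},
          "c": {"sports": 0.8}}
graph = {"a": ["b", "c"], "b": ["c"]}
print(simulate_spread(graph, agents, {"politics": 1.0}, seed="a"))
# -> {'a', 'b'}: user c cares about sports, so the message stops there.
```

In a model like this, speeding up information flow amounts to choosing seed users and routes where the per-agent share probabilities stay high, which matches Singh’s point about steering the network’s dynamics.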

Using these methods, the researchers have identified key factors in the flow of information and discovered techniques for influencing a message’s spread.

Hollerer said that the team also worked on modeling trust between users of social networks, with applications to areas such as online recommendation systems, in which trust can play a key role.

“An important aspect is how do you model and assess trust, how do you quantify it, and how does it develop?” Hollerer said. “Because if you know that, you could actually do the right things to have that happen.”
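The article does not specify how the team quantifies trust. One standard approach from the trust-modeling literature, sketched here purely as an illustration, is to maintain a Beta-distribution estimate that grows with positive interactions and shrinks with negative ones:

```python
# Hypothetical sketch of one way to quantify pairwise trust: a
# Beta-distribution estimate updated from interaction history. This is
# a common technique in the trust-modeling literature, not necessarily
# the team's actual method.

class TrustScore:
    def __init__(self):
        # Beta(1, 1) prior: with no evidence, estimated trust is 0.5.
        self.positive = 1
        self.negative = 1

    def record(self, interaction_was_good: bool):
        """Update after one interaction, e.g. a recommendation that
        turned out to be accurate (good) or misleading (bad)."""
        if interaction_was_good:
            self.positive += 1
        else:
            self.negative += 1

    @property
    def value(self) -> float:
        """Expected trust: the mean of the Beta posterior."""
        return self.positive / (self.positive + self.negative)

# Example: trust develops with a track record of good interactions.
# This also makes the misuse Hollerer warns about concrete: fabricated
# positive interactions inflate the score in exactly the same way.
score = TrustScore()
for outcome in [True, True, True, False, True]:
    score.record(outcome)
print(f"trust = {score.value:.2f}")  # 5 / 7, about 0.71
```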

According to Hollerer, a malicious user who understands these techniques could manufacture false trust, while a knowledgeable legitimate user could apply the same techniques to guard against such impostors.

“Clearly it opens up the danger of misusing these techniques as well, and then you have to see how you shield against that,” Hollerer said. “If you understand how that works, is it possible that somebody did that in order to improperly give the impression that the information is more trustworthy than it actually is?”

The team’s work is focused primarily on understanding, and while the potential for misuse exists, that same understanding could be used to develop preventative measures against malicious users and to deepen society’s grasp of online social interactions.

“Our application examples are all for giving you more and better information, but you could turn that around and spread misinformation,” Hollerer said. “But the same techniques can be used to shield against that.”

A version of this article appeared on page 9 of May 21st’s print edition of the Daily Nexus.
