Blockchain-Based Startups Empower Individuals to Identify Fake News

Back in 2017, a man named James McDaniel confessed that he had created a fake news website to test how gullible Internet readers could be. “As I continued to write ridiculous things they just kept getting shared and I kept drawing more viewers,” he told PolitiFact. In under two weeks, more than 1 million people had viewed and shared his inventions on social platforms, contributing to the vast spread of misinformation that characterized the 2016 US presidential election. 

US citizens haven’t been the only ones affected by fake news. It’s been widely documented that misinformation also played a role in the UK Brexit vote and the 2018 Brazilian elections, which is why individuals, organizations, and governments around the world are starting to pay close attention to what’s true—and what’s not. 

Cracking Down on Clickbait

Enter tech startups like Blackbird.AI, a San Francisco-based company that seeks to ensure content is truthful and credible. Misinformation, the founders explain on their Medium page, “creates a cause and effect that will change everything from a political election to a social belief system.”

They seek to empower publishers, campaigns, businesses, governments, and citizens to catch fake news before it has a chance to take off. Their strategy: to introduce labels that evaluate content based on “credibility signals” such as who the author is, how the content was funded, and its ad quality score. 

The labeling system is like a nutrition label for news: it allows readers to quickly and accurately evaluate how trustworthy a piece of online content is. By helping people figure out at a glance what’s trustworthy and what’s not, it may cut down on the sharing of false but shocking headlines. 

To assign each piece of content a label, Blackbird.AI uses artificial intelligence to analyze millions of articles, track patterns, and classify news based on the credibility signals. Their technology scans articles, references, websites, social media pages, and memes—all in real time. And once the content is verified, it’s recorded on the blockchain ledger, where it remains permanently. 
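Blackbird.AI has not published its scoring algorithm, but the general idea of combining weighted “credibility signals” into a single nutrition-label-style rating can be sketched in a few lines. The signal names, weights, and thresholds below are illustrative assumptions for this sketch, not the company’s actual model:

```python
# Toy sketch of a "nutrition label" credibility score.
# Signal names, weights, and thresholds are illustrative assumptions.

SIGNAL_WEIGHTS = {
    "author_identified": 0.4,   # is the author named and verifiable?
    "funding_disclosed": 0.3,   # is the content's funding transparent?
    "ad_quality": 0.3,          # ad quality score, normalized to 0..1
}

def credibility_label(signals: dict) -> str:
    """Combine per-signal scores (each 0..1) into a coarse label."""
    score = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                for name in SIGNAL_WEIGHTS)
    if score >= 0.75:
        return "high credibility"
    if score >= 0.4:
        return "mixed signals"
    return "low credibility"

print(credibility_label({"author_identified": 1.0,
                         "funding_disclosed": 1.0,
                         "ad_quality": 0.8}))  # high credibility
```

A real system would derive these signal scores automatically from the article, its references, and its distribution history rather than take them as inputs.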

Photo Evidence 

Startups aren’t the only ones putting trust in the blockchain. Established publishers like the New York Times, for instance, are experimenting with blockchain as a way to validate their online content. 

Their research and development team is planning to use Hyperledger Fabric’s permissioned blockchain to store important information about each photo: where it was taken, who the photographer was, and how it was edited. 

Social Butterflies 

Even social media sites are trying to do their part, as they should: 68% of Americans rely on them for their daily news. Google is debuting new fact-checking tools and promoting high-quality content, as well as funding global initiatives that help kids tell the difference between real news and fake or misleading news. And Facebook, LinkedIn, and Twitter have all removed huge numbers of bogus accounts from their platforms.  

But according to Civil, another blockchain-based news startup, just 23% of people say they trust news on social media, which means it might be up to the blockchain disruptors to reform the system entirely.  

There’s a rising wave of support for exactly that. Rather than scoff at their efforts, traditional, well-known publishers are giving a nod to what these startups are doing. “As the footprint of traditional newspapers is shrinking,” CBS News notes, “Civil’s is growing.”  

The Columbia Journalism Review chimes in: “[Creating a new form of money] has the potential to help realign the incentives that underlie the journalism business.”  

It’s true that the fight against misinformation has only just begun. Hackers and writers of fake news steadily evolve their tactics, and nothing beats good old critical thinking. As fake-news writer James McDaniel notes, visible disclaimers on his web pages explaining that his posts were “fiction, and presumably fake news” went largely unnoticed and unread by the millions who shared them.  

Yet with startups like Blackbird.AI and Civil making it easier for Internet readers to identify and discount fake and misleading content, there’s hope for the future. 

As Civil puts it: “You’ll have access to the news you need and can trust what you read.”  

Image via SuperRGB on Unsplash

Terminating Fake News with Blockchain

Sometime in 2018, the American Herald Tribune website published a story about Friedrich (anglicized as ‘Frederick’) Trump, Donald Trump’s grandfather. The copy claimed that Frederick was a pimp and a regular drug user. The story went on to say that the man amassed a fortune running several brothels.

More recently, stories began to circulate about climate activist Greta Thunberg’s identity, claiming that Thunberg is not actually real but a character played by a young actress named Estella Renee. These stories, striking as they might be, are not true. They are fake news.

Fake news and deception in history

The use of misinformation is by no means a new occurrence. Deception tactics have been used in warfare for centuries. There is evidence that deception tactics were used during the Roman-Persian wars, for example, but Operation Fortitude is perhaps one of the best-known examples of a large-scale deception campaign in history. Fortitude was implemented as a prelude to the D-Day Landings, in the closing stages of WW2. The operation’s goal was to divert the German armies’ attention from the intended landing sites across Normandy by creating the illusion that Allied troops would land elsewhere.

In more recent times, deception and misinformation have morphed into the catch-all term ‘fake news’, which has become a widespread scourge in modern media.

In the kingdom of fake news, the half-truth is king

The American Herald Tribune story as a whole was false, and its authors probably knew it, but some elements rang true, and the writers were probably well aware of this too. There are conflicting stories as to how Donald Trump’s grandfather Frederick acquired his wealth. While it is true that he opened a hospitality establishment (The Arctic Restaurant and Hotel) in Bennett, British Columbia, back in 1897, it is also widely believed that he used this hotel as a brothel, a practice that reportedly made him a very rich man. This wealth would trickle down the family tree, eventually landing in Donald Trump’s lap.

It is this half-truth facet that makes fake news believable. But fake news needs one key element to warrant its existence, one that those bent on spreading misinformation cannot do without: an audience.

We live in an era where social media reigns supreme. People spend a great deal of their time online: Shopping, reading, gaming, learning, and a myriad of other activities can be experienced through reflective screens. It is this perma-presence in the online world that makes it so ready-made for the propagation of fake news.

About 2 billion people have Facebook accounts. That’s around one-third of the world population. Such huge uptake creates a fertile ground for the rapid spread of misinformation or fake news, and the bad news is that, so far, there hasn’t been any effective means to combat this scourge.

How can blockchain help kill fake news?

Facebook, Instagram, and a myriad of other apps and media providers bombard us with text and video content every minute of every day, and in the virtual world, telling what’s real and what’s not is a rather difficult task, particularly if there is a concerted and well-organized campaign behind the effort.

Blockchain has emerged as an effective weapon to counteract the insidious effects of fake news. The technology is characterized by its inherent decentralization and transparency, which effectively means that there is no central authority regulating it, and the network state is visible to every participating node.

Blockchain’s traits enable a tamper-proof mechanism whereby any data stored on the chain becomes immutable. This can be applied to anything: Shipping information, medical records, and of course, news items.

Take any photograph, video, or news content found online. How can you really tell whether it’s real or fake? Blockchain can add immutable metadata to any content, thus creating a digital history that will follow that particular piece of content anywhere it goes. This digital history can be traced and verified anytime, anywhere, so any tampering along the way becomes detectable.
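As a rough illustration of the mechanism, and not of any particular vendor’s implementation, content can be fingerprinted with a cryptographic hash, bundled with provenance metadata, and linked to the previous record, so that anyone can later re-hash the content and detect tampering:

```python
# Minimal sketch of attaching immutable metadata to content:
# hash the content, bundle it with provenance metadata, and chain
# each record to the previous one so tampering is detectable.
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_record(content: bytes, metadata: dict, prev_hash: str) -> dict:
    record = {
        "content_hash": sha256(content),
        "metadata": metadata,     # e.g. photographer, location, edit history
        "prev_hash": prev_hash,   # links this record to the chain so far
    }
    # Hash the record itself so later records can reference it.
    record["record_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
    return record

def verify(content: bytes, record: dict) -> bool:
    """Re-hash the content and compare against the stored fingerprint."""
    return sha256(content) == record["content_hash"]

photo = b"...raw image bytes..."
rec = make_record(photo, {"photographer": "Jane Doe"}, "0" * 64)
assert verify(photo, rec)                 # original content checks out
assert not verify(b"doctored", rec)       # altered content is detected
```

On a real permissioned chain such as Hyperledger Fabric, these records would be written by authorized nodes and replicated across the network, which is what makes the history immutable in practice.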

Conclusion

There is evidence suggesting that the American Herald Tribune originates in Iran, which may explain the Friedrich Trump story, among others. Fake news is often the first salvo in a multi-layered misinformation campaign to fulfill an agenda, whether it stems from an individual or a government. Because of the inherent dangers of fake news, many agencies are fighting to terminate the threat.

Commercial entities like Orange, one of France’s largest telecommunications companies, and the New York Times are developing blockchain-powered projects to fight, and hopefully eliminate or greatly reduce, the prevalence of misinformation, deepfakes, and fake news in the media. How successful these efforts will be remains to be seen, but they are at least a step in the right direction.

Image source: Roman Kraft via Unsplash

US Banking Giant Patents AI Fact Checker to Simplify Investing in Crypto

Capital One Services, a subsidiary of US banking giant Capital One, has patented a new artificial intelligence system to guide human cryptocurrency traders through the complicated world of misinformation in the digital assets space.

Capital One Services mainly deals in credit and car loans. According to the filing, the newly patented system leverages AI technology to sort credible cryptocurrency information from misinformation for those looking to invest or trade.

The cryptocurrency market is more of a frontier than an established terrain, a frontier where human traders face unique and difficult-to-overcome obstacles in retrieving relevant and useful trading information.

Cryptocurrency markets operate 24 hours a day, every day, and deal with elements foreign to other markets, such as blockchain forks, exchange hacks, and crypto airdrops, as well as a flood of crypto news sites and investor commentary and advice on social media.

The patent filing explains, “It would be impossible for human traders to track all of the above-mentioned cryptocurrency-related data and respond to that data in real-time. Further, it would also be difficult to verify the credibility of the cryptocurrency-related information in real-time. In particular, it is difficult to verify the credibility of speculation, rumors, opinions, and other information posted on social media and elsewhere.”

Three Components of AI Verification

Capital One’s AI verification system was filed with the U.S. Patent and Trademark Office as Patent No. 10,679,229. The filing explains that the AI tool consists of three major components.

The first major component is an algorithm that searches through a “plurality of modules,” or multiple independent sources (for example, different social media sites), for information it determines to be viable for crypto trading. Second, the AI feeds that information to a “credibility analysis engine,” which cross-references it and determines whether the event is credible based on historical examples. If the information is deemed credible, the AI checks how the market responded in previous instances.

Finally, the AI verification tool then collects the information and aggregates it to make fast market trading decisions for the investor.
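The filing does not disclose implementation details, but the three components read like a classic gather, score, and filter pipeline. The following sketch uses invented function names, data shapes, and a 0.5 credibility threshold purely for illustration; it is not Capital One’s code:

```python
# Illustrative three-stage pipeline matching the filing's description:
# (1) gather items from multiple sources, (2) score credibility against
# historical examples, (3) keep only credible items for a trading decision.

def gather(sources):
    """Stage 1: pull candidate items from multiple independent sources."""
    return [item for source in sources for item in source]

def credibility(item, history):
    """Stage 2: fraction of similar past items that proved true."""
    similar = [h for h in history if h["topic"] == item["topic"]]
    if not similar:
        return 0.0
    return sum(h["was_true"] for h in similar) / len(similar)

def aggregate(items, history, threshold=0.5):
    """Stage 3: keep only items the engine deems credible."""
    return [i for i in items if credibility(i, history) > threshold]

sources = [
    [{"topic": "fork", "text": "chain X will fork"}],
    [{"topic": "hack", "text": "exchange Y hacked"}],
]
history = [
    {"topic": "fork", "was_true": True},
    {"topic": "fork", "was_true": True},
    {"topic": "hack", "was_true": False},
]
credible = aggregate(gather(sources), history)
print(len(credible))  # 1: only the fork rumor has a credible track record
```

The patented system would additionally weigh how fast the news spreads and investor sentiment, as the filing notes, before acting on the surviving items.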

Additionally, the patent claims, “The machine-learning algorithm can also determine the reach … and how quickly the news spreads out, what investors said and felt … on social media as the news was spreading out.” The algorithm can determine how apprehensive or confident traders felt about the information.

While Capital One has filed the patent, it has not been launched as a product for investors yet and it is unclear if the AI system would be able to enact trades on behalf of its human investor without confirmation.

AI to Protect Investors

In the patent, Capital One reiterates its intention to protect investors from the crypto frontier. As with many US banks, Capital One initially blocked its account holders from purchasing crypto following the market implosion of early 2018.

Seemingly defending this 2018 decision, the patent filing remarked, “Many cryptocurrency investors rushed into the market without adequate knowledge and experience in either trading or cryptocurrencies. In fact, many of the cryptocurrency investors were trapped by short-term market movement and lost money quickly.”

“Bitcoin Has Depreciated in Half” Says Clueless Chechen Republic Leader Kadyrov

Ramzan Kadyrov, the Head of the Chechen Republic, has a strong distrust of Bitcoin and cryptocurrencies, along with a profoundly confused and incorrect evaluation of the BTC price and Bitcoin’s ability to act as a store of value. 

The Head of the Chechen Republic, Ramzan Kadyrov, recently shared some harsh criticism, along with some ignorance, of Bitcoin, as cryptocurrencies grow in popularity among Chechnya’s citizens.

In an article by Pravda on Aug. 31, Kadyrov was extremely critical of the media’s portrayal of digital assets, particularly of Bitcoin being represented as digital gold. The dictator claims to believe that Chechens are being duped with promises of “how to get rich quickly with the help of cryptocurrency.”

Kadyrov said:

“People take loans, save on themselves and their families, invest their last money in digital assets that promise incredible profits.”

While the dictator’s warning appears to have the Chechen people’s best interests at heart, it was clear that Kadyrov was hopelessly uneducated on the subject as he tried to warn Chechen citizens that excess profit always goes together with excess risk. He said:

“For example, over the past month, Bitcoin has depreciated in half.”

Although it may be unnecessary to disclose this information to anyone who has followed crypto at all this year, the Bitcoin price has in fact made tremendous gains over the past couple of months and is being widely accepted as a new form of digital gold by enterprises and educated investors.

The Chechen leader believes that crypto carries high risks and fills the people he rules over with dreams of easy money, and argues that he is more concerned about the moral side of cryptocurrency investment.

He concluded the piece:

“A person who invests in cryptocurrencies expects their value to grow many times. But why is he waiting for this? Did the person work hard to get this profit? Did the money he invested help other people? […] No, on the contrary, the price of such cryptocurrencies is growing only due to the greed of people who have invested in them, trying to attract new investors and profit from their greed.”

Kadyrov is the authoritarian leader of Chechnya, which falls under the Russian Federation. The unelected dictator said that he will not support any projects that leverage digital assets and cryptocurrency.

The Russian Federation passed its first major legislation regarding cryptocurrencies on July 28. However, the country’s Central Bank continues to treat the crypto industry as a criminal field and believes it facilitates illicit transactions.

Additionally, peer-to-peer Bitcoin marketplace Paxful has released some important data regarding the use of its trading platform in Russia. The study indicated that Russians are increasingly leveraging cryptocurrency to escape the corrupt, monolithic Russian banking system.

TikTok Crypto Influencers Mislead Viewers

TikTok, the popular social media platform, has become a go-to source of information for many young people today. However, a recent study conducted by dappGambl reveals that over one-third of cryptocurrency influencers on TikTok are sharing unvetted misinformation about Bitcoin and other cryptocurrency investments. Many of these influencers are promoting crypto investments without properly warning viewers about the risks, convincing unwary investors to put their hard-earned money into cryptocurrencies that are likely to lose value.

The study analyzed 1,161 crypto-related videos on TikTok, which used the hashtag “#cryptok.” More than one in three of these videos were found to be misleading, while just one in ten videos contained some form of disclaimer about the risks of investing. Additionally, 47% of the crypto influencers were found to be pushing services for their own profit.

The potential financial risk for unwary investors is high, with one in three misleading videos on TikTok mentioning Bitcoin. Furthermore, videos using popular crypto-related hashtags, such as #crypto, #cryptoadvice, and #cryptoinvesting, have cumulatively garnered over 6 billion views. However, viewers often overlook the ill intent of influencers and trust their content purely based on its high number of views or likes.

The study recommended that both new and seasoned investors do extensive research on crypto projects before making any form of investment. While the reach of crypto influencers is smaller than that of mainstream celebrities, such as Kim Kardashian, Jake Paul, and Soulja Boy, the potential financial risk for unwary investors remains just as high.

In recent years, many mainstream influencers have been accused of promoting cryptocurrencies to their millions of fans without disclosing the payments they received. For instance, the United States Securities and Exchange Commission forced Kim Kardashian to pay $1.26 million in penalties for promoting EthereumMax (EMAX).

In April 2022, a $1 billion lawsuit was filed against crypto exchange Binance, CEO Changpeng Zhao, and three crypto influencers for allegedly promoting unregistered securities. The Moscowitz Law Firm and Boies Schiller Flexner, who filed the lawsuit, called this a classic example of a centralized exchange promoting the sale of an unregistered security.

In conclusion, while TikTok can be an excellent source of information, viewers are advised to exercise caution when it comes to crypto influencers and do their own research before making any investments.

Over 1 in 3 TikTok Influencers Post Misleading Crypto Content

A recent study by dappGambl has revealed that TikTok influencers are posting misleading videos about cryptocurrency investments, with over one in three videos found to be deceptive. The social media platform has become an alternative to Google searches for many individuals, particularly younger generations. However, some influencers have been found to share unvetted misinformation on crypto investments, often trying to convince unwary viewers to put their hard-earned money into loss-making cryptocurrencies.

The analysis of 1,161 TikTok videos with the hashtag “#cryptok” revealed that only 1 in every 10 cryptok accounts or videos contained some form of disclaimer warning users about the risks of investing. Additionally, 47% of TikTok creators were found to be pushing services to make money. This lack of accountability and transparency highlights the need for better regulation in the social media industry.

The potential financial risk for unwary investors remains equally high, despite TikTok influencers having a smaller reach than their mainstream counterparts. The study also discovered that popular crypto-related hashtags such as crypto, cryptok, cryptoadvice, cryptocurrency, cryptotrading, and cryptoinvesting have cumulatively garnered over 6 billion views on TikTok. The platform has become a breeding ground for unverified information on crypto investments, causing viewers to overlook the ill intent of their favorite influencers and trust their content purely based on the high number of views or likes.

The consequences of this trend are severe, with individuals investing their hard-earned money into cryptocurrencies without proper research, often resulting in significant financial losses. The United States Securities and Exchange Commission (SEC) has also cracked down on the promotion of cryptocurrencies by influencers. The SEC forced Kim Kardashian to pay $1.26 million in penalties for the promotion of EthereumMax (EMAX). Other mainstream influencers such as Jake Paul and Soulja Boy have also been accused of promoting cryptocurrencies to their millions of fans without disclosing payments received.

On April 2, a $1 billion lawsuit was filed against crypto exchange Binance, its CEO Changpeng “CZ” Zhao, and three crypto influencers for promoting unregistered securities. The Moscowitz Law Firm and Boies Schiller Flexner, who filed the lawsuit, referred to the case as a “classic example of a centralized exchange, which is promoting the sale of an unregistered security.”

In conclusion, the study by dappGambl highlights the need for stricter regulations and accountability measures for social media platforms. Both new and seasoned investors are advised to do extensive research on crypto projects prior to making any form of investment. With the potential financial risk for unwary investors remaining high, it is crucial that social media platforms such as TikTok take responsibility for the content shared by their influencers, and ensure that users are properly warned about the risks of investing in cryptocurrencies.

Senator Michael Bennet Urges Tech Giants to Curb AI-Generated Misinformation

U.S. Senator Michael Bennet from Colorado has today called on leaders of prominent technology and artificial intelligence (AI) companies, including Meta, Alphabet, Microsoft, Twitter, TikTok, and OpenAI, to implement proactive strategies to combat the proliferation of misleading AI-generated content.

Bennet emphasized the need for identifying and labeling AI-generated content, highlighting the potential risks associated with the unchecked spread of misinformation. He stated, “Online misinformation and disinformation are not new. But the sophistication and scale of these tools have rapidly evolved and outpaced our existing safeguards.”

The Senator pointed out several instances where AI-generated content caused market turmoil and political unrest. He also cited the testimony of OpenAI CEO Sam Altman before the Senate Judiciary Committee, where Altman identified the potential of AI to spread disinformation as a serious concern.

Bennet acknowledged the initial steps taken by technology companies to identify and label AI-generated content. However, he stressed that these measures are voluntary and can be easily bypassed. He proposed a framework for labeling AI-generated content and requested the companies to provide their identification and watermarking policies and standards.

The Senator concluded, “Continued inaction endangers our democracy. Generative AI can support new creative endeavors and produce astonishing content, but these benefits cannot come at the cost of corrupting our shared reality.”

Bennet has been a strong advocate for digital regulation, youth online safety measures, and enhanced protections for emerging technologies. He recently introduced the Digital Platform Commission Act, the first legislation in Congress to create a dedicated federal agency for overseeing large technology companies and protecting consumers.

This move by Senator Bennet underscores the growing concern about the misuse of AI technology and the urgent need for regulatory measures to ensure its responsible use. It remains to be seen how the tech giants will respond to this call for action.

WEF Report Warns of AI and Quantum Computing Risks

The report published by the World Economic Forum (WEF) in 2024 raises significant concerns about the adverse effects of artificial intelligence technologies and the rising dangers of quantum computing. Its findings highlight the negative effects AI has had on people, corporations, ecosystems, and economies: the propagation of misinformation and disinformation, a growing mistrust of facts and authority, and severe disruptions in the global employment market.

Notably, AI’s role in content generation makes it harder to differentiate fact from fabrication, increasing the potential to influence public opinion through modified or deliberately fabricated content. The rapid replacement of people by artificial intelligence across a variety of sectors, from the entertainment industry to scientific research, has also raised concerns about the global labor market, as it could result in significant job losses and economic instability.

In addition, the paper emphasizes the potential disruptive impact that quantum computing might have. With its heavy reliance on artificial intelligence, this technology poses a danger to the current technological system and raises significant security issues. In the realm of geopolitics, the incorporation of artificial intelligence into military applications raises ethical and human rights concerns, especially in relation to autonomous weapon systems.

The WEF paper also discusses the ramifications of artificial intelligence (AI) in healthcare, with particular attention to the ethical use of data and potential biases in medical research and development that favor more affluent populations. Amid broader worries about AI widening economic inequities, particularly between high-income and low-income countries, it warns that AI might enable the development of more targeted and severe biological weapons.

The World Economic Forum (WEF) has called for enhanced public awareness and education on artificial intelligence (AI), as well as for its regulation. The critical need to govern this rapidly developing technology was brought to light by the recent global statement on artificial intelligence safety, which was endorsed by leaders from 29 nations and the European Union. Opinions about the regulation of artificial intelligence differ from country to country, however. The United Kingdom, for example, has decided not to regulate AI in the foreseeable future and will instead concentrate on innovation.

Binance Labs Clarifies Non-Involvement in SkyArk Chronicles' Claimed Investment

Binance Labs, a significant player in the blockchain and cryptocurrency investment industry, has addressed the ambiguity surrounding its involvement in a funding round for SkyArk Chronicles, a project built by SkyArk Studios. The question became contentious after SkyArk Studios announced a $15 million investment round, apparently led by Binance Labs.

Binance Labs says it was unaware of the investment claim made by SkyArk Chronicles prior to the announcement. This lack of prior contact violated standard protocol, under which projects are obligated to inform Binance Labs before making public statements about investments, in line with the terms of their investment contracts.

Binance Labs contacted the SkyArk Chronicles team immediately after the news was reported and demanded that any mention of Binance or Binance Labs be removed from their communications. The project, however, merely deleted the tweet without providing any explanation of the facts.

Binance Labs has made clear that its primary objective is to provide fair support for growing and developing the projects in its portfolio. It urged a focus on product development rather than speculation based on misunderstandings about funding, which it argued contributed to the uncertainty. The episode illustrates the need for communication about investments and funding in the blockchain and cryptocurrency industry to be not just transparent but also truthful.

Trump Warns of AI and Deepfake Dangers in Fox Business Interview

Former President Donald Trump has voiced significant concerns over the potential dangers posed by artificial intelligence (AI), including the issue of deepfake technology. In a recent interview with Maria Bartiromo on Fox Business, Trump labeled AI as “possibly the most hazardous thing out there,” emphasizing the urgent need for action against the rapidly evolving capabilities of AI technologies. His comments shed light on the alarming potential for AI-generated deepfake videos to incite conflict and spread misinformation, reflecting a broader apprehension regarding the security challenges and ethical implications of advanced AI.

Trump’s critique of AI underscores the complexity and severity of the threat posed by generative AI technologies, which have witnessed exponential growth in recent years. The ability of AI to create deepfakes—highly convincing digital manipulations where individuals, including political figures, are mimicked—has been a particular point of concern for Trump. He recounted an incident where he was depicted in a deepfake video endorsing a product, highlighting the difficulties in distinguishing between real and manipulated content. This incident serves as a stark example of the challenges that deepfakes pose to individuals and institutions alike, raising critical questions about authenticity, trust, and the potential for misuse of technology in spreading disinformation and influencing public opinion.

The former president’s warnings about AI and deepfakes align with broader concerns raised by experts and policymakers about the ethical use of AI. The capability of AI technologies to generate realistic content that can fool even the most discerning observers presents a profound challenge to security, financial markets, and democratic processes. Trump’s call for immediate action echoes the sentiments of many who believe that regulatory measures, ethical guidelines, and technological solutions must be swiftly implemented to mitigate the risks associated with AI and deepfake technologies.

Moreover, Trump’s commentary brings attention to the need for a collective effort to address the implications of AI. As AI continues to advance at an unprecedented pace, the need for responsible innovation and the development of robust frameworks to ensure the ethical use of AI technologies has never been more critical. The potential for AI to be used in warfare and other nefarious activities further underscores the urgency of developing comprehensive strategies to govern the use of AI, ensuring that its benefits are harnessed while minimizing its risks to society.

In conclusion, Donald Trump’s critique of AI and deepfake technology highlights a pressing issue facing today’s digital and interconnected world. The potential for AI to be misused in creating convincing forgeries and spreading misinformation necessitates a proactive and coordinated response. As society navigates the challenges posed by these technologies, it is imperative to foster an environment of responsible AI use that prioritizes ethical considerations, transparency, and the protection of individuals’ rights and security. The conversation initiated by Trump’s comments serves as a crucial reminder of the ongoing need to critically assess the impact of AI on society and to take decisive steps towards safeguarding against its potential dangers.
