Facebook said on Thursday it had banned all remaining accounts linked to the Myanmar military, citing the junta’s use of deadly force against anti-coup demonstrators.
The move, which takes effect immediately, applies to the military and entities controlled by the armed forces on both Facebook and Instagram.
It also bans “military-linked commercial entities” from advertising on the platforms.
“Events since the February 1 coup, including deadly violence, have precipitated a need for this ban,” the social media giant said in a statement.
“We believe the risks of allowing the Tatmadaw on Facebook and Instagram are too great,” it added, using the Myanmar name for the country’s armed forces.
The junta has steadily increased its use of force against a massive and largely peaceful civil disobedience campaign demanding Myanmar’s army leaders relinquish power.
Three anti-coup protesters have been killed in demonstrations, while a man patrolling his Yangon neighbourhood against night arrests was also shot dead.
Facebook said its ban was intended to prevent Myanmar’s generals “from abusing our platform”.
The military has used Facebook to boost its claims that voter fraud marred an election last November after ousted civilian leader Aung San Suu Kyi’s party won in a landslide.
Since seizing power, the junta has arrested hundreds of anti-coup protesters, ordered nightly internet blackouts and banned social media platforms — including Facebook — in an effort to quell resistance.
Thursday’s announcement follows Facebook’s earlier decision to remove a page run by the regime’s “True News” information service after the tech giant accused it of inciting violence.
Pages for government offices now run by the junta remain unaffected.
“This ban does not cover government ministries and agencies engaged in the provision of essential public services,” the company said. “This includes the Ministry of Health and Sport, and the Ministry of Education.”
In recent years, hundreds of army-linked pages have been blocked by Facebook after the social media giant came under heavy criticism for its ineffective response to malicious posts in the country.
Junta chief Min Aung Hlaing and other top brass were booted from the platform in 2018, a year after a military-led crackdown forced around 750,000 members of the Rohingya Muslim community to flee into neighbouring Bangladesh.
Facebook admitted that year it had failed to do enough to prevent the incitement of violence in Myanmar.
“We can and should do more,” Facebook executive Alex Warofka said at the time.
Facebook said it would search for and remove content which praised the storming of the Capitol or encouraged the violence.
Twitter and Facebook suspended Donald Trump on Wednesday over posts accused of inflaming violence in the US Capitol, as social media companies scrambled to respond to mayhem by supporters who bought into his baseless attacks on the integrity of the election.
The unprecedented sanctions came after the president took to social media to repeat his numerous false claims about fraud and other impropriety in the election he lost to Joe Biden.
“This is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video,” said Facebook vice president of integrity Guy Rosen.
“We removed it because on balance we believe it contributes to rather than diminishes the risk of ongoing violence.”
Facebook barred Trump from posting at the social network or its Instagram service for 24 hours, saying his messages were promoting violence.
Trump’s falsehoods, ranging from specific allegations to broad conspiracy theories, also prompted Facebook to change a label added to posts aiming to undermine the election results.
The new label reads: “Joe Biden has been elected president with results that were certified by all 50 states. The US has laws, procedures, and established institutions to ensure the peaceful transfer of power after an election.”
An activist group calling itself a mock Facebook oversight board said sanctions against Trump at the social network were long overdue.
“This is too little, too late,” the group said in a statement.
“Donald Trump has breached Facebook’s own terms and conditions multiple times. His account is not just a threat to democracy but to human life.”
Permanent Twitter ban?

The crackdown came after Trump’s supporters stormed the US Capitol in an attack that led to one woman being shot and killed by police, interrupting congressional debate over Biden’s election victory.
The assault came after the president had urged supporters to march on the seat of government during a speech outside the White House in which he alleged baselessly that the election had been stolen from him.
He later released a video on social media in which he repeated the false claim — even telling the mob “I love you.”
YouTube removed the video in line with its policy barring claims challenging election results.
Twitter said Trump’s messages were violations of the platform’s rules on civic integrity and that any future violations “will result in permanent suspension of the @realDonaldTrump account.”
The platform said Trump’s account would be locked for 12 hours and that if the offending tweets were not removed, “the account will remain locked.”
The platform said it would seek to take down additional calls for protests, including peaceful ones, if they violated a curfew imposed by the city of Washington, or any attempts to “re-stage” the storming of Congress.
“The violent protests in the Capitol today are a disgrace,” a Facebook spokesperson said.
“We prohibit incitement and calls for violence on our platform. We are actively reviewing and removing any content that breaks these rules.”
Facebook maintained that it was in contact with law enforcement officials and continued to enforce bans on the QAnon conspiracy movement, militarised social movements and hate groups.
A #StormTheCapitol hashtag was blocked at Facebook and Instagram, according to the internet titan.
The suits allege Facebook sought to squelch competition by acquiring the messaging applications — Instagram in 2012 and WhatsApp in 2014.
United States federal and state antitrust enforcers filed suit against Facebook on Wednesday claiming the social media giant abused its dominant position and seeking to unwind its acquisitions of messaging services Instagram and WhatsApp.
Separate suits filed by the Federal Trade Commission and a coalition of state officials called for the divestment of Instagram and WhatsApp, services which are part of the Facebook “family” of applications.
“Facebook’s actions to entrench and maintain its monopoly deny consumers the benefits of competition,” said Ian Conner, director of the FTC’s Bureau of Competition.
“Our aim is to roll back Facebook’s anticompetitive conduct and restore competition so that innovation and free competition can thrive.”
A separate legal action was filed by antitrust enforcers from 48 US states and territories.
“For nearly a decade, Facebook has used its dominance and monopoly power to crush smaller rivals and snuff out competition, all at the expense of everyday users,” said New York state Attorney General Letitia James, who leads the coalition.
The action presages a fierce court battle seeking to force Facebook to divest the apps which have become an increasingly important element of the business model of the California giant and integrated into its technology.
Facebook said it would “vigorously” defend its actions and denied abusing its position.
“Antitrust laws exist to protect consumers and promote innovation, not to punish successful businesses,” Facebook general counsel Jennifer Newstead said in a statement.
“Instagram and WhatsApp became the incredible products they are today because Facebook invested billions of dollars, and years of innovation and expertise, to develop new features and better experiences for the millions who enjoy those products.”
Newstead added that these deals had been approved years ago by the FTC, which she said meant “the government now wants a do-over, sending a chilling warning to American business that no sale is ever final.”
Consumer harm?

Some analysts argued the antitrust cases would have difficulty proving Facebook harmed consumers since its services are largely free.
Jessica Melugin of the Competitive Enterprise Institute libertarian think tank called the actions “political theater dressed up as antitrust law” and argued that “a billion consumers worldwide have benefited from Facebook’s purchase of Instagram and WhatsApp.”
Cleveland State University law professor Christopher Sagers said the case may have merit because Facebook “has been an unabashedly predatory and exclusionary bully in every sector it’s been involved in.”
But he also noted that “American antitrust law is now so hard to enforce in all cases, especially in cases like this, involving no conspiracy among competitors, and rather involving only one big firm’s unilateral conduct.”
Tiffany Li, a Boston University law professor who studies the sector, said that while Facebook has rivals bidding for internet users’ attention it has a big advantage because of its access to data.
“One company having exclusive ownership of vast amounts of user data, with no potential for interoperability or access to competitors, can be anti-competitive,” she said.
The FTC announced earlier this year it would review acquisitions made by five Big Tech firms over the past decade, opening the door to a wave of potential antitrust investigations.
The consumer protection agency said it would review deals made by Amazon, Apple, Facebook, Microsoft and Google parent Alphabet since 2010 amid growing complaints about tech platforms which have dominated key economic sectors.
The US Justice Department, which shares antitrust enforcement with the FTC, in October sued Google parent Alphabet, accusing the Silicon Valley giant of maintaining an “illegal monopoly” in online search and advertising and opening the door to a potential breakup. Eleven US states joined that case.
The social media giant should disclose any information it has relating to crimes against the Rohingya in Myanmar.
Last month, Facebook moved to block a bid by The Gambia in a US court seeking disclosure of posts and communications by members of Myanmar’s military and police. The legal step relates to a case The Gambia has brought against Myanmar before the International Court of Justice (ICJ), accusing it of genocide against the Rohingya Muslim minority.
The social media giant urged the US District Court for the District of Columbia to reject the “extraordinarily broad” request, saying it would violate a US law that bars electronic communication services from disclosing users’ communications.
In a subsequent public statement, Facebook confirmed that it would not comply with The Gambia’s demand, but claimed to be cooperating with the United Nations Independent Investigative Mechanism for Myanmar (IIMM) – an investigative body established to collect and analyse evidence of serious international crimes committed in Myanmar.
A few days later, however, this was refuted by the IIMM’s head, Nicholas Koumjian. Koumjian explained that while Facebook had indeed been in talks with the IIMM for a year, it had failed to share with investigators “highly relevant” material that could be “probative of serious international crimes”. A few days after this, there were reports – confirmed by the IIMM – that Facebook had shared a first data set that only “partially complies” with the IIMM’s requests.
Facebook has stated that it supports “action against international crimes” by working with the appropriate authorities. This series of actions, however, points to the opposite conclusion: rather than supporting The Gambia’s legal efforts to bring the perpetrators to justice, the company is obstructing a case relating to genocide.
In August 2017, the Myanmar military launched a so-called “clearance operation” in Rakhine State, home to Rohingya and other ethnic minorities. Over several weeks, soldiers committed atrocities in the region, killing thousands, committing mass rapes, burning villages to the ground, and driving more than 700,000 Rohingya to flee into neighbouring Bangladesh.
Since then, it has been established that Facebook was used as a medium for the dissemination of hate speech as a precursor to these atrocities. In September 2018, in a report on the situation in Myanmar, the UN Independent International Fact-Finding Mission on Myanmar highlighted the role Facebook played in creating an enabling environment in the country for the commission of atrocities.
Around the time of the release of the report, Facebook suspended several Myanmar military accounts, including that of the head of the army, and subsequently commissioned a human rights impact assessment into its Myanmar operations. The latter was quite tepid, and the former, a case of too little, too late.
In November 2019, The Gambia filed an application at the ICJ, claiming that a conflict exists between it and Myanmar regarding the interpretation and application of the Genocide Convention, based on how Myanmar was treating the Rohingya population, which The Gambia claimed rose to the level of genocidal acts.
This was a legally unprecedented move – the first instance where a case was filed by a state not directly affected by the international crimes alleged. Nevertheless, The Gambia obtained an initial positive ruling this January from the court – a ruling relating to protective measures, which includes directions to Myanmar to cease and desist from certain actions that would violate the Genocide Convention, and to provide the court with regular updates on its compliance with the order.
However, The Gambia needs to take many more steps and overcome several hurdles to bring the case to a successful conclusion. One of these steps is to obtain more evidence that demonstrates the Myanmar military’s “genocidal intent” against the Rohingya. One likely repository of such evidence is Facebook.
Knowing that Facebook holds a trove of information that may shed light on various aspects of the alleged international crimes, The Gambia initiated legal proceedings in the US in June 2020 to compel the company to hand over information that would assist the case before the international court.
The request, made in accordance with a US federal statute, was opposed by Facebook on the grounds that it would violate a US law that, in the company’s words, “protects billions of global internet users from violations of their right to privacy and freedom of expression”.
However, the provisions of the law invoked – the Stored Communications Act, 18 USC 2702(a) – do not appear to be a complete bar to sharing the information. As The Gambia argued in response to Facebook’s opposition in court, the act aims to protect the privacy of private individuals in the US, not to shield the unlawful acts of state actors such as the Myanmar government. Moreover, it would not apply to information already removed from the system – which is much of what is being requested – given its prior removal for violating Facebook’s own terms and conditions.
The optics of not supporting the disclosure of evidence that may assist in establishing the crime of genocide are truly terrible. As bad, is the obfuscation that seems to accompany this position. Facebook, a company that has built its entire business model on monetising user data, is likely aware of this.
August marked the third anniversary of the mass exodus and atrocities committed against the Rohingya – a time for reflection – and a time to act in support of the survivors, in their quest for accountability and justice. Facebook must walk the talk now.
The views expressed in this article are the author’s own and do not necessarily reflect NRM’s editorial stance.
ABOUT THE AUTHOR
Priya Pillai is an international lawyer, and head of the Asia Justice Coalition secretariat.
Facebook and other social media companies are being scrutinised over how they handle misinformation.
With just two months left until the United States presidential election, Facebook says it is taking more steps to encourage voting, minimise misinformation and reduce the likelihood of post-election “civil unrest”.
The company said on Thursday it will restrict new political advertisements in the week before the election and remove posts that convey misinformation about COVID-19 and voting.
It will also attach links to official results to posts from candidates and campaigns that declare premature victories.
“This election is not going to be business as usual. We all have a responsibility to protect our democracy,” Facebook CEO Mark Zuckerberg said in a post. “That means helping people register and vote, clearing up confusion about how this election will work, and taking steps to reduce the chances of violence and unrest.”
Facebook and other social media companies are being scrutinised over how they handle misinformation, given issues with President Donald Trump and other candidates posting false information and Russia’s continuing attempts to interfere in US politics.
Facebook has long been criticised for not fact-checking political ads or limiting how they can be targeted at small groups of people.
With the nation divided, and election results potentially taking days or weeks to be finalised, there could be an “increased risk of civil unrest across the country”, Zuckerberg said.
In July, Trump refused to publicly commit to accepting the results of the upcoming election, as he scoffed at polls that showed him lagging behind Democratic rival Joe Biden.
Trump has also made false claims that the increased use of mail-in voting because of the coronavirus pandemic allows for voter fraud. That has raised concerns over the willingness of Trump and his supporters to abide by election results.
Asked in an interview aired on Thursday whether he had personally engaged with Trump on his posts about voting, Zuckerberg said he did not think he had recently.
But Zuckerberg said he had had “certain discussions with him in the past where I’ve told him that I thought some of the rhetoric was problematic”.
Under the new measures, Facebook says it will prohibit politicians and campaigns from running new election advertisements in the week before the election. However, they can still run existing advertisements and change how they are targeted.
Posts with obvious misinformation on voting policies and the coronavirus pandemic will also be removed, the company said.
On Messenger, Facebook’s messaging app, users will be able to forward articles to a maximum of five other people. The company will also work with the Reuters news agency to provide official election results, making the information available both on its platform and via push notifications.
After being caught off-guard by Russia’s efforts to interfere in the 2016 US presidential election, Facebook, Google, Twitter and other companies put safeguards in place to prevent it from happening again.
That includes taking down posts, groups and accounts that engage in “coordinated inauthentic behavior” and strengthening verification procedures for political advertisements.
Last year, Twitter banned political advertisements altogether, while Alphabet’s Google limited the ways in which election advertisers could micro-target voters.
Zuckerberg said Facebook had removed more than 100 networks worldwide engaging in such interference over the last few years.
“Just this week, we took down a network of 13 accounts and two pages that were trying to mislead Americans and amplify division,” he said.
But experts and Facebook’s own employees say the measures are not enough to stop the spread of misinformation – including from politicians and in the form of edited videos.
Facebook had previously drawn criticism for its advertisement policy, which cited freedom of expression as the reason for letting politicians like Trump post false information about voting.
Social media giant faces scrutiny after media reports revealed it ignored anti-Muslim hate speech by BJP leaders.
An Indian parliamentary committee has grilled Facebook representatives after the social media giant was accused of bias and not acting against anti-Muslim posts on its platform.
The closed-door hearing on Wednesday followed accusations in newspaper reports that the social media giant was allowing hate speech on its platform and that its top policy official in India had shown favouritism towards Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP).
The social media giant has denied the allegations and the outcome of the hearing was unclear.
Facebook came under scrutiny after a series of reports by the United States-based Wall Street Journal (WSJ) showed the company ignored anti-Muslim hate speeches by BJP leaders while Facebook’s India policy chief, Ankhi Das, made decisions favouring Modi.
On Tuesday, New Delhi-based English daily the Indian Express reported that following a request from the party, Facebook removed pages critical of the BJP months before the 2019 general elections.
In email exchanges reported by the Express, the BJP had told Facebook the pages were “in violation of expected standards”, with posts that were “not in line with facts”.
Requests for comment sent to Facebook went unanswered.
India is Facebook’s biggest market with more than 300 million users while the company’s messaging app, WhatsApp, boasts 400 million users in the world’s second-most populous nation.
The BJP spends more than any political party in India on Facebook advertisements.
Dozens of Muslims have been lynched in the past six years by vigilantes, with many of the incidents triggered by fake news regarding cow slaughter or smuggling shared on WhatsApp.
The WSJ had reported last month that Das refused to apply the company’s hate speech policies to BJP politicians and other “Hindu nationalist individuals and groups”.
Facebook allowed anti-Muslim posts on its platform to avoid ruining the company’s relationship with the BJP, the WSJ said. Time Magazine made similar allegations last week.
Das last month apologised to Muslim staff for sharing a post that dubbed Muslims in India a “degenerate community”, according to a report by US media outlet BuzzFeed News.
Opposition attacks Facebook

The Facebook deposition was originally slated for Tuesday but was deferred following the death of former Indian President Pranab Mukherjee.
The opposition Congress party said in a statement on Tuesday that there was a “blasphemous nexus between the BJP and Facebook”.
“The aim of the BJP is ‘divide and rule’ and the social media giant Facebook is helping them achieve this,” it said in the statement.
Opposition parliamentarian Derek O’Brien, in a letter sent to Facebook CEO Mark Zuckerberg on Tuesday, also said there was “enough material in the public domain, including memos of senior facebook management (in India)” to show bias favouring the BJP.
Meanwhile, senior BJP leader and India’s communications minister Ravi Shankar Prasad claimed in a letter he sent to Zuckerberg that ahead of the 2019 national elections, “there was a concerted effort by Facebook … to not just delete pages or substantially reduce their reach but also offer no recourse or right of appeal to affected people who are supportive of right-of-centre ideology”.
Prasad also alleged in the letter that recent press reports were the result of “selective leaks … to portray an alternate reality”.
“This interference in India’s political process through gossip, whispers and innuendo is condemnable,” Prasad said.
Ajit Mohan, Facebook’s India chief, has defended the company’s actions and denied any bias. But the company also admitted it had to do better on tackling hate speech.
Right-wing bias?

The alleged favouritism towards India’s Hindu nationalists is not the first instance of the social media giant being accused of tacitly supporting right-wing groups.
Last year, campaign group Avaaz said that the tech giant was failing to rein in a “tsunami” of hate posts inflaming ethnic tensions in India’s northeast state of Assam.
Avaaz said the dehumanising language – often targeting India’s Bengali Muslims – was similar to that used on Facebook about Myanmar’s mainly Muslim Rohingya before an army crackdown and ethnic violence forced 700,000 Rohingya to flee in 2017 to Bangladesh.
The platform has also come under fire in Myanmar over hate speech directed against the Rohingya over the past decade.
Investigators from the United Nations said Facebook played a key role in spreading hate speech that fuelled the violence.
The company admitted two years ago that it had been “too slow” to address the problem.
Also last month in the US, a Facebook engineer was reportedly fired over internal posts revealing that the company gave preferential treatment to right-leaning groups and individuals by preventing their posts from being removed despite violations of content rules.
Far-right news website Breitbart, non-profit group PragerU and Trump supporters Diamond and Silk were some of the organisations and personalities favoured by Facebook, according to internal posts seen by BuzzFeed.
Platform, accused of fanning hate towards Rohingya, says it removed 280,000 posts that breached rules in second quarter.
Facebook said on Tuesday that it was preparing for Myanmar’s general election in November by improving the detection and removal of hate speech and content that incites violence, and preventing the spread of misinformation.
The company said in a blog post that between now and November 22, it would remove “verifiable misinformation and unverifiable rumours” that are assessed as having the potential to suppress the vote or damage the “integrity” of the electoral process.
“For example, we would remove posts falsely claiming a candidate is a Bengali, not a Myanmar citizen, and thus ineligible,” Facebook said.
The platform has come under fire in Myanmar over hate speech directed against the mainly Muslim Rohingya over the past decade, including during the brutal military-led crackdown in 2017 that forced more than 730,000 Rohingya to flee the country. Investigators from the United Nations said Facebook played a key role in spreading hate speech that fuelled the violence.
The company admitted two years ago that it had been “too slow” to address the problem.
Facebook said it was working with two partners in Myanmar to verify the official Facebook pages of political parties. It now has three fact-checking partners in Myanmar: BOOM, AFP Fact Check and Fact Crescendo.
Violating standards

It also said it had introduced a new feature that limits to five the number of times a message can be forwarded.
The feature is now available in Myanmar and, over the next few weeks, will be made available to Messenger users worldwide, the company added in the blog.
This year’s elections in Myanmar, scheduled for November 8, will be the second since the generals who had led the country for decades ceded power while ensuring their continuing influence through a 25-percent quota of seats in parliament.
The first, in 2015, brought longtime pro-democracy campaigner Aung San Suu Kyi’s National League for Democracy (NLD) to power but, during the two-month campaign period, an estimated one million Rohingya were stripped of their right to vote.
Facebook and other social media platforms have faced criticism worldwide in recent years from activists, regulators and governments for the spread of misinformation, including during elections.
Australian government wants the social media giant to pay for local publishers’ content amid pressure from Murdoch.
Facebook Inc. plans to block people and publishers in Australia from sharing news, a move that pushes back against a proposed law forcing the company to pay media firms for their articles.
The threat escalates an antitrust battle between Facebook and the Australian government, which wants the social-media giant and Alphabet Inc.’s Google to compensate publishers for the value they provide to their platforms.
The legislation still needs to be approved by Australia’s parliament. Under the proposal, an arbitration panel would decide how much the technology companies must pay publishers if the two sides can’t agree.
Facebook said in a blog post on Monday that the proposal is unfair and would allow publishers to charge any price they want. If the legislation becomes law, the company said, it will take the unprecedented step of preventing Australians from sharing news on Facebook and Instagram.
“This is a decision we’re making reluctantly,” said Campbell Brown, Facebook’s vice president of global news partnerships. “It is the only way to protect against an outcome that will hurt, not help Australia’s media outlets.”
Facebook is still working through the details of how it would block articles from being shared, she said.
‘Heavy-Handed Threats’

Responding to Facebook’s announcement, Australia Treasurer Josh Frydenberg said: “We don’t respond to coercion or heavy-handed threats wherever they come from.” Forcing digital platforms to pay for original content would help create “a more sustainable media landscape,” Frydenberg said in a statement.
The chairman of Australia’s competition regulator, Rod Sims, said Facebook’s threat was “ill-timed and misconceived.” The proposed legislation seeks to bring “fairness and transparency” to Facebook and Google’s relationships with Australian news businesses, Sims said in a statement.
Google has also raised alarms about Australia’s proposal. The measure “would force us to provide you with a dramatically worse” Google Search and YouTube, and “put the free services you use at risk in Australia,” Mel Silva, managing director of Google Australia and New Zealand, wrote in an open letter.
Media’s Struggles

The Australian government has said it’s trying to level the playing field between the tech giants and a local media industry that’s struggling from the loss of advertising revenue to those companies. In May, for example, Rupert Murdoch’s News Corp. announced plans to cut jobs and close or stop printing more than 100 local and regional newspapers in Australia.
The Australian-born Murdoch has for years advocated that Facebook and Google pay for news articles that appear on their platforms. And News Corp. has lauded government efforts to force the two companies to pay for news.
Michael Miller, executive chairman of News Corp Australasia, was quoted widely as saying: “The tech platforms’ days of free-riding on other people’s content are ending. They derive immense benefit from using news content created by others and it is time for them to stop denying this fundamental truth.”
Yet Facebook’s decision to block news on its platform could prevent publishers from reaching a wider audience. In the first five months of 2020, the company said it sent 2.3 billion clicks from its News Feed to Australian news websites.
‘Insignificant’ Loss

The decision could also limit the appeal of Facebook’s social-media platform to Australians who use it to read news. However, Brown said removing news articles from Facebook in Australia would be “insignificant” to its business because they are a small fraction of what users see.
Australia’s new rules are part of a global push by government agencies to regulate the tech giants. In some countries, officials are concerned not only that Facebook and Google are capturing much of the advertising dollars that have sustained journalism, but also with the types of articles getting shared. The stories that tend to go viral on Facebook are those that stoke emotion and divisiveness, critics argue.
In April, France’s antitrust regulator ordered Google to pay media companies to display snippets of articles. In June, Google said it would pay some media outlets that will be featured in a yet-to-be-released news service in Germany, Australia and Brazil.
Last October, Facebook introduced a separate news section, paying some publishers whose stories are featured. Brown declined to share numbers on the popularity of the Facebook News tab, but said nearly all of the readers are a new audience for publishers. Last week, Facebook said it plans to expand the news section to other markets globally.
Facebook on Thursday launched its voting information center as internet platforms unveiled fresh moves to protect the November US election from manipulation and interference.
The hub was described as central to defending against deception and confusion in what promises to be an election roiled by the pandemic and efforts to dupe voters.
The move comes amid a coordinated effort by Facebook, Google and other online platforms to curb the spread of disinformation and thwart efforts to manipulate voters.
Google separately announced new features for its search engine to provide detailed information about how to register and vote, directing users to local election administrators.
Google-owned YouTube will take down content aimed at manipulation, including “videos that contain hacked information about a political candidate shared with the intent to interfere in an election,” according to a statement.
YouTube will also remove videos promoting efforts to interfere with the voting process such as telling viewers to create long voting lines.
The announcements come a day after an industry group, members of which include Google, Microsoft, Reddit, Pinterest and Twitter, met with federal agencies including the FBI’s foreign influence task force to step up coordination on election interference.
A joint industry statement said the tech platforms, including the Wikimedia Foundation which operates Wikipedia, would be on the lookout for disinformation.
“We know that disinformation and misinformation are at their most virulent in an information vacuum,” Facebook head of security policy Nathaniel Gleicher said on a briefing call with reporters.
“Getting accurate information to voters is one of the best vaccines against disinformation campaigns.”
Facebook has set a goal of helping 4 million people register to vote in the US.
The hub, which Facebook announced earlier this year, will be prominently positioned on Facebook and Instagram and “will serve as a one-stop-shop to give people in the US the tools and information they need to make their voices heard at the ballot box,” the social media giant said.
– ‘Hack-and-Leak’ –
Expected attacks include “hack-and-leak” tactics along the lines of what was used against Democratic presidential candidate Hillary Clinton in 2016, Facebook said.
The tactic typically involves state-sponsored actors giving hacked information to traditional media, then exploiting social media platforms to spread the stories, according to Gleicher.
“We know it is an effective technique,” Gleicher said.
Facebook said the voter hub will evolve with the election season, shifting its focus from registration and poll-worker volunteering to how to vote during the pandemic and, finally, the tallying of ballots.
“We know we can help millions of people access accurate, reliable information about the election,” vice president of social impact Naomi Gleit said on the briefing call.
“We firmly believe that voting is voice; the best way to hold our leaders accountable.”
Facebook and Instagram users can use the tool to check if they are registered to vote and how to do so if they are not.
“Voting alerts” on the social network which include updates on the election process will be restricted to “pages from a government authority,” Facebook said.
– Tally turbulence –
Facebook is expecting malicious actors to try to exploit uncertainty about the election process or promote violence while votes are being counted, which is expected to take longer than usual due to the pandemic prompting more people to vote by mail.
The social network has created “red teams” and conducted internal exercises to prepare, according to Gleicher.
US President Donald Trump has made unsubstantiated claims about the reliability of voting by mail, a method he has used himself.
Facebook’s latest moves come amid concerns over campaigns by governments aimed at influencing elections and public sentiment in other countries through media outlets that disguise their true origins.
State-led influence campaigns were prominent on social media during the 2016 US election and have been seen around the world.
Facebook on Monday said it has created a new unit devoted to financial services to harmonize payment systems on its platform.
The new group, called Facebook Financial, will be headed by e-commerce veteran David Marcus, who was president of PayPal before joining the leading social network six years ago.
Marcus is one of the creators of Facebook’s digital money network Libra, and heads the team building a Novi digital wallet tailored for the currency.
The Novi wallet — set to launch when Libra coins debut — promises to give Facebook opportunities to build financial services into its offerings, expand its own commerce and let more small businesses buy ads on the social network.
Facebook Financial will handle management and strategy for all payments and money services across the Silicon Valley company’s platform.
“Today various payments features exist across our apps, and we want to make sure decision making, execution and compliance are not fragmented,” Facebook said in an email reply to an AFP inquiry.
“We want to be able to give people the ability to make a payment however they choose — debit, credit or Libra digital currencies.”
Noting security concerns posed by Facebook’s yet-to-be-launched digital currency Libra, the Federal Reserve last week revealed plans for its own instant payments system.
FedNow will provide households and businesses with instant access to payments, for wages, government benefits or sales, without waiting days for checks to clear, the Fed said.
The system, which is not due to launch for two to three years, “will be designed to maintain uninterrupted 24x7x365 processing with security features to support payment integrity and data security,” the central bank said.
Facebook’s announcement last year of plans to design the Libra cryptocurrency and payments system raised immediate red flags for global finance officials who expressed a barrage of withering criticism about the security and reliability of a private network.
Facebook said Wednesday it had removed a post from the page of US President Donald Trump over what it called “harmful COVID misinformation.”
The post was a video clip from an interview in which Trump contended that children are “almost immune” from the deadly virus.
“This video includes false claims that a group of people is immune from COVID-19 which is a violation of our policies around harmful COVID misinformation,” a Facebook spokesperson said.
Facebook condemned Saturday what it called an “extreme” ruling by a Brazilian Supreme Court judge ordering it to block the accounts of 12 high-profile allies of President Jair Bolsonaro, which it vowed to appeal.
Brazil’s Supreme Court is overseeing an investigation into allegations that members of the far-right president’s inner circle ran a social media campaign to discredit the court, as well as slander and threaten its judges.
As part of that probe, Justice Alexandre de Moraes ordered Facebook to suspend the accounts of 12 Bolsonaro allies, and Twitter another 16 accounts.
The US social media giants complied on July 25 — but initially only blocked visitors in Brazil from viewing the accounts.
The blocked users soon skirted the ban by telling their followers how to change their account settings to another country.
Moraes then ordered the US social media giants Thursday to enforce the suspension worldwide.
When Facebook did not initially comply, saying it would appeal to the full Supreme Court, Moraes fined the company 1.9 million reals ($365,000) and issued a summons for its top executive in Brazil, Conrado Leister.
“This new legal order is extreme, posing a threat to freedom of expression outside of Brazil’s jurisdiction and conflicting with laws and jurisdictions worldwide,” Facebook said in a statement.
“Given the threat of criminal liability to a local employee, at this point we see no other alternative than complying with the decision by blocking the accounts globally, while we appeal to the Supreme Court.”
The row comes as Facebook and Twitter face increasing pressure in the United States and around the world to act more aggressively against hate speech and false information on their platforms.
In Brazil, it is part of ongoing tension between Bolsonaro and the high court, which has also ordered a probe into allegations the president obstructed justice to protect members of his inner circle from police investigations.
The affected accounts include high-profile figures such as conservative former lawmaker Roberto Jefferson, business magnate Luciano Hang and far-right activist Sara Winter.
Amazon, Apple, Google and Facebook are too powerful and will likely emerge from the coronavirus pandemic even stronger, the head of a US congressional antitrust committee said Wednesday at a high-stakes hearing featuring the CEOs of the four US tech giants.
“Simply put, they have too much power,” Representative David Cicilline said in his opening remarks at the hearing expected to feature a grilling of the leaders of the technology companies.
“Whether it’s through self-preferencing, predatory pricing, or requiring users to buy additional products, the dominant platforms have wielded their power in destructive, harmful ways in order to expand,” the Democrat from Rhode Island said.
“Prior to the COVID-19 pandemic, these corporations already stood out as titans in our economy,” Cicilline said. “In the wake of COVID-19, however, they are likely to emerge stronger and more powerful than ever before.”
Republican Representative Jim Sensenbrenner struck a more moderate tone, saying “being big is not inherently bad.”
“Quite the opposite, in America you should be rewarded for success,” Sensenbrenner said.
The unprecedented joint appearance — remotely by video — before the House Judiciary subcommittee features Tim Cook of Apple, Jeff Bezos of Amazon, Mark Zuckerberg of Facebook and Sundar Pichai of Google and its parent firm Alphabet.
The hearing is part of a probe into the competitive market landscape and antitrust law, but questioning is likely to veer into other areas such as hate speech and content moderation, economic inequality, privacy and data protection and even claims of political “bias” from President Donald Trump and his allies.
Ahead of the hearing, the top executives of the firms sought to offer an upbeat assessment of the tech landscape, highlighting their roots and values and how the companies have benefitted average Americans.
“An important way we contribute is by building products that are helpful to American users in moments big and small, whether they are looking for a faster route home, learning how to cook a new dish on YouTube, or growing a small business,” Pichai said in his remarks.
Only in America Cook said Apple is “a uniquely American company whose success is only possible in this country,” and that the California giant is “motivated by the mission to put things into the world that enrich people’s lives.”
Bezos, in his first appearance before a congressional committee, spoke of his modest upbringing and initial backing from his parents to start Amazon and its early losses of billions of dollars.
“I walked away from a steady job into a Seattle garage to found my startup, fully understanding that it might not work,” said Bezos, who is the world’s richest person based on his Amazon stake.
Zuckerberg called social media colossus Facebook a “proudly American company” and added that “our story would not have been possible without US laws that encourage competition and innovation.”
But hours ahead of the hearing, fast-growing video app TikTok accused Facebook of “maligning attacks” that are part of a movement “disguised as patriotism and designed to put an end to our very presence in the US.”
TikTok welcomes “fair competition” chief executive Kevin Mayer said in a blog post while adding that “without TikTok, American advertisers would again be left with few choices.”
Political theatre? The hearing is part of a congressional probe into “online platforms and market power” and takes place against a backdrop of antitrust investigations in the United States, Europe and elsewhere.
Current US antitrust laws make it difficult for enforcers to target companies simply for being large or dominant without also showing harm to consumers or abuse of market power.
The committee, however, could lay a blueprint for antitrust in the digital era that would require a fundamental rewrite of the century-old competition rules.
Some analysts say the hearing could offer fresh insights into how Big Tech “platforms” squelch competition by buying rivals or copying products from rivals.
Apple’s hefty commissions for its App Store and Amazon’s dealings with third-party sellers are expected to come under scrutiny.
Google, which has faced antitrust investigations in Europe, will face questions on whether it favours its own services to the detriment of rivals and Facebook for its dominance of the social media landscape including its acquisitions of Instagram and WhatsApp.
Robert Atkinson of the Information Technology and Innovation Foundation, a think tank often aligned with the sector, said the political theatre may distract from the issue of competition and focus on issues “from privacy to political speech.”
“These companies create enormous value for hundreds of millions of users and small businesses,” Atkinson said. “Congress shouldn’t twist antitrust law to launch an ill-defined broadside on internet platforms as a class.”
Mark Zuckerberg announces a policy review after backlash over his decision not to moderate controversial messages posted by Trump.
Mark Zuckerberg, chief executive of Facebook, promised to review the social network’s policies after its decision not to moderate controversial messages posted by the US president that appeared to encourage violence against those protesting police racism.
Protests continue over police brutality as several United States cities hold memorials to honour George Floyd, an unarmed Black man who was killed by police in Minneapolis, Minnesota, on May 25.
California’s governor has ordered the state police training programme to stop teaching officers how to use a hold that can block the flow of blood to the brain.
Seattle’s mayor has banned the city’s police force from using tear gas on protesters.
Facebook chief Mark Zuckerberg has defended his decision not to condemn remarks by US President Donald Trump that appeared to threaten violent retribution against demonstrators protesting the death of a Black man at the hands of police, media reported on Tuesday.
In a tweet that was also visible on his Facebook page, Trump slammed demonstrators protesting the death of George Floyd in Minneapolis as “THUGS” and appeared to promote a violent response by saying, “when the looting starts, the shooting starts.”
That phrase was first used by a Miami police chief in 1967 to justify a violent crackdown on black neighborhoods.
Twitter placed a “public interest notice” on the post for violating the platform’s rules “about glorifying violence,” a move not mirrored by Facebook.
Online magazine The Verge reported that Zuckerberg held a long conference call with Facebook employees and addressed accusations that the social media platform allowed election misinformation and veiled promotions of violence from Trump.
Zuckerberg told employees he should have offered them more transparency, The Verge reported citing a recording of the meeting.
But he stood by what he termed a “pretty thorough” evaluation of Trump’s posts, saying the choice to avoid labeling or removing them was difficult but correct.
According to the recording, Zuckerberg described being upset by some of Trump’s recent posts, including the one regarding looting.
“(But) I knew that I needed to separate out my personal opinion … from what our policy is and the principles of the platform we’re running are.”
Several Facebook employees have resigned over the lack of action, NobleReporters learnt.
One said publicly that the company would end up “on the wrong side of history.”
Mark Zuckerberg has revealed plans to use his platform to combat racial injustice as he spoke about racism and police brutality.
The Facebook founder made this known as he mourned the killings of not just George Floyd but also Breonna Taylor, Ahmaud Arbery and others.
Mark Zuckerberg reacted to the deaths of a number of African Americans resulting from police brutality and he disclosed the efforts he’s making to use his platform to combat racial injustice.
He condemned police brutality saying it “reminds us how far our country has to go to give every person the freedom to live with dignity and peace.” He wrote;
“The pain of the last week reminds us how far our country has to go to give every person the freedom to live with dignity and peace. It reminds us yet again that the violence Black people in America live with today is part of a long history of racism and injustice. We all have the responsibility to create change.
“We stand with the Black community — and all those working towards justice in honor of George Floyd, Breonna Taylor, Ahmaud Arbery and far too many others whose names will not be forgotten.
“To help in this fight, I know Facebook needs to do more to support equality and safety for the Black community through our platforms. As hard as it was to watch, I’m grateful that Darnella Frazier posted on Facebook her video of George Floyd’s murder because we all needed to see that. We need to know George Floyd’s name. But it’s clear Facebook also has more work to do to keep people safe and ensure our systems don’t amplify bias.
“The organizations fighting for justice also need funding, so Facebook is committing an additional $10 million to groups working on racial justice. We’re working with our civil rights advisors and our employees to identify organizations locally and nationally that could most effectively use this right now.
“I know that $10 million can’t fix this. It needs sustained, long term effort. One of the areas Priscilla and I have personally worked on and where racism and racial disparities are most profound is in the criminal justice system. I haven’t talked much about our work on this, but the Chan Zuckerberg Initiative has been one of the largest funders, investing ~$40 million annually for several years in organizations working to overcome racial injustice. Priscilla and I are committed to this work, and we expect to be in this fight for many years to come. This week has made it clear how much more there is to do.
“I hope that as a country we can come together to understand all of the work that is still ahead and do what it takes to deliver justice — not just for families and communities that are grieving now, but for everyone who carries the burden of inequality.”
Facebook has agreed to pay a Can$9 million (US$6.5 million) fine for making false or misleading claims about its privacy settings, Canada’s competition watchdog announced Tuesday.
An investigation of the social media network’s practices from 2012 to 2018 found that the company gave Canadians the impression that users could control who saw their personal information on Facebook and Messenger.
But it allowed their data to be shared with third party developers, the Competition Bureau said in a statement.
“Canadians expect and deserve truth from businesses in the digital economy, and claims about privacy are no exception,” said competition commissioner Matthew Boswell.
He noted that Facebook had vowed publicly to stop the practice in 2015 but continued to allow third-party access to its users’ messages and posts until 2018.
As part of the settlement, Facebook has agreed not to make false or misleading representations about the disclosure of personal information, and pay the Competition Bureau’s Can$500,000 investigation costs.
The company has about 24 million users in Canada.
In February, Canada’s privacy commissioner took Facebook to court for violating privacy laws. The company has called it overreach and has asked a judge to quash the case.
Facebook will allow users in the United States and Canada to transfer photos and videos to a rival tech platform for the first time – a step that could assuage antitrust concerns by giving users an option to easily leave the company’s services, the social media network said on Thursday.
The tool lets Facebook users transfer data stored on its servers directly to another photo storage service, in this case Google Photos – a feature known as data portability.
U.S. and Canadian users will be able to access the tool through their Facebook accounts starting Thursday. The function has already been launched in several countries including in Europe and Latin America.
It allows the social media company to give users more control over their data and respond to U.S. regulators and lawmakers who are investigating its competitive practices and allegations it has stifled competition.
The U.S. launch also comes ahead of a hearing set up by the Federal Trade Commission on Sept. 22 to examine the potential benefits and challenges of data portability. Control of data that hurts competition has become a critical topic in the antitrust debate in the United States and Europe.
Facebook’s director of privacy and public policy, Steve Satterfield, said that over the past couple of years the company has heard calls from policymakers and regulators asking it to facilitate choice, making it easier for people to choose new providers and move their data to new services.
“So it really is an important part of the response to the kinds of concerns that drive antitrust regulation or competition regulation,” Satterfield told Reuters in an interview.
He said the company would be open to participating in the FTC hearing if the agency approaches them.
Data portability is a requirement under Europe’s privacy law called the General Data Protection Regulation (GDPR) and California’s privacy law called the California Consumer Protection Act (CCPA).
Also, Democratic Senators Richard Blumenthal of Connecticut and Mark Warner of Virginia along with Republican senator Josh Hawley of Missouri introduced a bill, known as the ACCESS Act, in October, which requires large tech platforms to let their users easily move their data to other services.
Satterfield said Facebook hopes to eventually allow users to move key data such as their contacts and friend lists to another platform in a way that protects user privacy.
Facebook developed its data portability tool as a member of the Data Transfer Project – which was formed to allow web users to easily move their data between online service providers whenever they want – and counts Facebook, Alphabet’s Google, Microsoft, Twitter and Apple among its contributors.
Members of the project are also looking at letting users transfer data such as emails, playlists and events in the future, the company said.
On a call with academics and policy experts from the fields of competition and privacy on Wednesday, Facebook said it is moving deliberately on data transfer partnerships with third parties to avoid a repeat of the Cambridge Analytica incident.
The now defunct British political consulting firm harvested the personal data of millions of Facebook users without their consent and used it for political advertising.
Google has launched a journalism emergency relief fund to support thousands of small, medium and local news publishers globally amid the coronavirus pandemic.
Through its Google News Initiative (GNI), the fund will be made available to news organisations producing original news for local communities during the COVID-19 crisis period.
The search giant explained that the fund will range from the low thousands of dollars for small hyper-local newsrooms to low tens of thousands for larger newsrooms, with variations per region.
According to a statement issued by Richard Gingras, Google’s VP, news, applications for the fund started on April 14, and would close by April 29.
He added that interested organisations can apply for it here.
“We’ve made this as streamlined as possible to ensure we get help to eligible publishers all over the world. At the end of the process, we’ll announce who has received funding and how publishers are spending the money,” the statement read.
Google said it is also giving $1 million collectively to the International Centre for Journalists, which plans to provide immediate resources to support reporters globally, and the Columbia Journalism School’s Dart Center for Journalism and Trauma, which is helping journalists exposed to traumatic events experienced during the crisis.
Gingras said this would further consolidate other efforts made by the search engine to support the media industry and connect people to quality information at this time of need.
“We believe it is important to do what we can to alleviate the financial pressures on newsrooms, and will continue to look at other ways to help with more to announce soon,” he said.
FJP and EJC announce the European journalism COVID-19 support fund
The European Journalism Centre (EJC) and the Facebook Journalism Project (FJP) have also launched a $3 million fund to support hundreds of community, local and regional European news organisations continue to cover the coronavirus health crisis.
According to a statement on Wednesday, EJC said applications for the fund will open on April 16, adding that it involves three categories namely: engagement, emergency and innovative funds respectively.
“As people turn to local news for critical information on how to keep their friends, families and communities safe, we understand these journalists are hit especially hard in the current economic crisis,” the statement added.
“EJC will direct emergency funds via the $3 million that Facebook is investing to small and mid-sized news organisations and journalists most in need in the hardest hit countries across Europe, in order to support their businesses and reporting when we need them most.”
Facebook, the world’s biggest social network, says it will start removing false content aimed at misinforming people about the spread of coronavirus.
In a statement released on its blog, the social media giant said the move was necessary following the declaration of a global emergency over the deadly virus by the World Health Organisation (WHO) on Thursday.
Facebook stated that its support for the ongoing global campaign against the outbreak of the disease would take three forms.
This, it said, will include: “Limiting misinformation and harmful content, providing helpful information and support, and empowering partners with data tools.”
“As the global public health community works to keep people safe, Facebook is supporting their work in several ways, most especially by working to limit the spread of misinformation and harmful content about the virus and connecting people to helpful information,” the statement read.
“Our global network of third-party fact-checkers are continuing their work reviewing content and debunking false claims that are spreading related to the coronavirus.
“When they rate information as false, we limit its spread on Facebook and Instagram and show people accurate information from these partners. We also send notifications to people who already shared or are trying to share this content to alert them that it’s been fact-checked.
“We will also start to remove content with false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities that could cause harm to people who believe them. We are doing this as an extension of our existing policies to remove content that could cause physical harm.
“We’re focusing on claims that are designed to discourage treatment or taking appropriate precautions. This includes claims related to false cures or prevention methods — like drinking bleach cures the coronavirus — or claims that create confusion about health resources that are available.
“We will also block or restrict hashtags used to spread misinformation on Instagram, and are conducting proactive sweeps to find and remove as much of this content as we can.”
The Tarkwa Bay Beach is touted as one of the cleanest and most peaceful beaches in Lagos. A major tourist site that draws many visitors and offers a getaway from the hustle of Lagos, it may no longer be available to the public.
The Tarkwa Bay island beach has been in existence since 1960. On the outskirts of the island is a 110-year-old lighthouse on Lighthouse Beach.
Tarkwa Bay has remained largely undeveloped in spite of its tourist attractions and the rate of development in other parts of the area. Opposite the bay is the Eko Atlantic Housing project.
There’s a community of people who live on the island. Occupations on the island include fishing, water transportation and petty trading, as well as work as chalet keepers, gardeners, civil servants, oil workers (NNPC), vessel workers, canopy rental operators, mini restaurant/bar operators, security guards, fashion designers, baggage handlers, craftsmen and general artisans.
On January 21, 2020, officials of the Nigerian Navy stormed the Tarkwa Bay community, fired shots into the air and gave the residents one hour to vacate the island.
The residents were being evicted on claims of vandalism and illegal drilling of pipelines. There were openly dug pits across the edge of the island, containing oil and water. Some reports say that the vandalism occurred off the island, but the residents did not report the illegal acts.
The beach is accessible only by ferries and canoes, yet these residents had to evacuate the island quickly without adequate notice from the naval officers who chased them out.
All these people have been displaced with no alternative source of accommodation or compensation provided by the government following the evacuation. This is an infringement of the basic rights of the residents. It is also against international treaties that Nigeria has agreed to be bound by.
The beautiful island of Tarkwa Bay, which has been a favourite getaway spot for many Lagosians, will no longer be available to the public.
These people lost not only their homes but also their means of livelihood. Those who were fishermen have lost access to the water.
The Tarkwa Bay beach community is not the first to suffer this kind of illegal eviction; several other communities like it have been affected by the eviction actions of Nigerian Navy officials.
Facebook has become the first big US firm to tell staff to avoid travelling to China, as the death toll from the coronavirus rises.
The tech giant said it was acting “out of an abundance of caution” to protect its employees.
Other global companies have introduced travel restrictions and car makers are taking staff out of the country.
More than 100 people have died of the disease, and confirmed cases have risen above 4,500.
China has imposed further restrictions on travel in and out of Hubei province, where the virus originated, as it tries to curb its spread, and transport links in and out of the provincial capital, Wuhan, are effectively in lockdown.
Some domestic firms have responded by extending the Chinese New Year holiday and asking staff to work from home.
Facebook, which has a division in China selling products such as Oculus virtual reality headsets, has asked employees to halt non-essential travel to mainland China and told employees who had travelled there to work from home.
“We have taken steps to protect the health and safety of our employees,” a spokesman for the social media giant said.
The US government has asked Americans to “reconsider” planned visits to China and is advising against travel to Hubei province. Washington has also said it plans to fly consular staff and US citizens out of Wuhan. Governments of several countries are considering helping their nationals to leave the city.
South Korean companies are also taking steps to protect their employees from the potential infection.
Home appliances company LG has put a complete ban on travel to China and has advised employees on business trips in the country to return home as quickly as possible. Chipmaker SK Hynix has urged staff to avoid all non-essential travel to China.
Several international car companies have production sites around Wuhan, which is China’s seventh biggest city and a major motor manufacturing hub.
French car-making group PSA, which owns the Peugeot and Citroen brands, has said it will bring French staff and their family members, a total of 38 people, out of Wuhan.
Japan’s Honda Motor, which also operates in Wuhan, has said it is planning to fly about 30 of its Japanese staff home.
Nissan, which is building a plant with Wuhan-based Dongfeng, has also said it will evacuate most of its Japanese staff and their families from the city.
Car makers are also being affected by the Chinese authorities’ decision to delay the reopening of their businesses after the Lunar New Year holiday.
In Shanghai, Tesla, General Motors, and Volkswagen have all been affected by the city’s government extending the break to 9 February.
All three companies have either their own factories in the city or operate plants through ventures with local partners.
In Wuhan, travel out of the city of 11 million people has been severely restricted and non-essential vehicles have been banned from the roads. However, the city’s mayor said millions of travellers had already left ahead of the holidays, before the lockdown was implemented.
Across China, several other major cities have suspended public transport systems, taxis and ride-hailing services.