Facebook is making changes to a range of ads on the social network in an effort to protect users who are searching for a job or housing from discrimination. Advertisers that run housing, employment and credit ads will no longer be able to target users based on age, gender or ZIP code, and will have fewer targeting options overall, Facebook said Tuesday. The company also said it’s building a tool so users can search for housing ads throughout the US.

“Housing, employment and credit ads are crucial to helping people buy new homes, start great careers, and gain access to credit,” Facebook COO Sheryl Sandberg said in a blog post. “They should never be used to exclude or harm people.”

The changes are part of a settlement that Facebook reached with civil rights groups including the American Civil Liberties Union, which filed lawsuits alleging that Facebook allowed advertisers to discriminate against users by excluding people from seeing certain housing, employment and credit ads based on gender, age and where they lived. Civil rights groups, labor organizations, workers and consumers filed five discrimination lawsuits against Facebook between 2016 and 2018, according to the settlement posted by the ACLU. The changes also affect ads placed on Facebook-owned Instagram and the messaging app Messenger.

Facebook has been under mounting pressure to change its advertising tools since ProPublica reported in 2016 that the world’s largest social network allowed advertisers to place housing ads that excluded users by race, which is illegal under federal law. In response, Facebook pulled a tool that allowed advertisers to exclude users from seeing housing, employment and credit ads based on their “ethnic affinity.” But Facebook continued to receive complaints that its advertising tools were being used by employers to post job ads that excluded women or older workers.
“Discrimination in advertising for jobs, housing and credit has been unlawful since the 1960s and ’70s with the enactment of our civil rights laws,” said Galen Sherwin, a senior staff attorney at the ACLU, during a conference call. “But the ability to target ads to users based on their data and online behavior has been threatening to give this kind of discrimination a new life in the 21st century.”

Sherwin said advertisers will have to certify whether they’re placing a housing, employment or credit ad on Facebook. Targeting options for these advertisers will shrink from tens of thousands to a few hundred, she said. These advertisers, for example, won’t be able to target soccer moms, new dads or wheelchair users, or people who are similar to their current customers. And they won’t be able to target users less than 15 miles from a specific address or the center of a city, according to the ACLU.

Civil rights groups will also test housing, employment and credit ads on Facebook to make sure the company implements the changes it’s promising, and advocacy groups will be on the lookout for advertisers trying to skirt Facebook’s new ad rules. A Facebook spokesperson said the company doesn’t break out how many housing, employment or credit ads are placed on the social network every month or year.

Originally published March 19, 11:39 a.m. PT. Updated 2:09 p.m. PT with remarks from the ACLU’s conference call, details about the settlement and comment from a Facebook spokesperson.
Kashmir’s most wanted militant and Burhan Wani’s successor Zakir Musa killed in Tral

Two militants were gunned down in an encounter that broke out between security forces and militants in Jammu and Kashmir’s Shopian district on Tuesday, June 11. The militants belonged to the Ansar Ghazwatul Hind (AGH) outfit, according to the police.

“The slain militants have been identified as Shakir Ahmad of Shopian and Sayar Bhat of Kulgam district. Both of them were affiliated with the AGH outfit,” the police reportedly said.

The security forces launched a cordon-and-search operation in the Anweera area of Shopian on Monday evening, after receiving a tip-off about militants being present in the area.
The operation turned into a gunfight after the militants opened fire at the security forces in the early hours of Tuesday. “When challenged today morning, the hiding militants opened fire, triggering an encounter in which two militants were killed,” the police were quoted as saying in an IANS report. Although the firing has stopped, the search operation is still underway.

AGH chief Zakir Musa was killed by security forces in Dadsara village of the Pulwama area on May 24 this year. He was a close associate of Burhan Wani, the Hizbul Mujahideen commander killed in 2016. Musa succeeded Wani and later headed the Al Qaeda affiliate AGH. He had taken to militancy in 2013.

The killing of Zakir Musa was seen as a big success for the security forces in their anti-militancy operations in Kashmir.
A six-kilometre-long queue of stationary vehicles formed at the Paturia ferry terminal in Shivalaya upazila of Manikganj district on Friday morning, causing immense suffering to Eid holidaymakers, reports UNB.

Several hundred buses, private cars and microbuses were stuck at the ferry terminal with homegoers on board, said Ajmal Hossain, deputy general manager (Aricha region) of the Bangladesh Inland Water Transport Authority (BIWTA). Nineteen ferries are in operation at the ghat to carry vehicles, goods and people to the southern districts, said Ajmal, adding that things will return to normal if all the ferries run smoothly.
Copyright 2012 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.

(PhysOrg.com) — In the future, according to robotics researchers, robots will likely fight our wars, care for our elderly, babysit our children, and serve and entertain us in a wide variety of situations. But as robotic development continues to grow, one subfield of robotics research is lagging behind the others: roboethics, or ensuring that robot behavior adheres to certain moral standards. In a new paper that provides a broad overview of ethical behavior in robots, researchers emphasize the importance of being proactive rather than reactive in this area.

The authors, Ronald Craig Arkin, Regents’ Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology in Atlanta, Georgia, along with researchers Patrick Ulam and Alan R. Wagner, have published their overview of moral decision making in autonomous systems in a recent issue of the Proceedings of the IEEE.

“Probably at the highest level, the most important message is that people need to start to think and talk about these issues, and some are more pressing than others,” Arkin told PhysOrg.com. “More folks are becoming aware, and the very young machine and robot ethics communities are beginning to grow. They are still in their infancy, but a new generation of researchers should help provide additional momentum. Hopefully articles such as the one we wrote will help focus attention on that.”

The big question, according to the researchers, is how we can ensure that future robotic technology preserves our humanity and our societies’ values.

Citation: How to make ethical robots (2012, March 12), retrieved 18 August 2019 from https://phys.org/news/2012-03-ethical-robots.html
They explain that, while there is no simple answer, a few techniques could be useful for enforcing ethical behavior in robots.

One method involves an “ethical governor,” a name inspired by the mechanical governor for the steam engine, which ensured that those powerful engines behaved safely and within predefined bounds of performance. Similarly, an ethical governor would ensure that robot behavior stays within predefined ethical bounds. For example, for autonomous military robots, these bounds would include principles derived from the Geneva Conventions and other rules of engagement that humans use. Civilian robots would have different sets of bounds specific to their purposes.

Since it’s not enough just to know what’s forbidden, the researchers say that autonomous robots also need emotions to motivate behavior modification. One of the most important emotions for robots to have would be guilt, which a robot would “feel” or produce whenever it violates the ethical constraints imposed by the governor, or when it is criticized by a human. Philosophers and psychologists consider guilt a critical motivator of moral behavior, as it leads to behavior modification based on the consequences of previous actions. The researchers propose that, when a robot’s guilt value exceeds specified thresholds, the robot’s abilities may be temporarily restricted (for example, military robots might lose access to certain weapons).

Though it may seem surprising at first, the researchers suggest that robots should also have the ability to deceive people, for appropriate reasons and in appropriate ways, in order to be truly ethical. They note that, in the animal world, deception indicates social intelligence and can have benefits under the right circumstances. For instance, search-and-rescue robots may need to deceive in order to calm or gain cooperation from a panicking victim.
Robots that care for Alzheimer’s patients may need to deceive in order to administer treatment. In such situations, the use of deception is morally warranted, although teaching robots to deceive appropriately will be challenging.

The final point that the researchers touch on in their overview is ensuring that robots, especially those that care for children and the elderly, respect human dignity, including human autonomy, privacy, identity, and other basic human rights. The researchers note that this issue has been largely overlooked in previous research on robot ethics, which mostly focuses on physical safety. Ensuring that robots respect human dignity will likely require interdisciplinary input.

The researchers predict that enforcing ethical behavior in robots will face challenges in many different areas. “In some cases it’s perception, such as discrimination of combatant or non-combatant in the battlespace,” Arkin said. “In other cases, ethical reasoning will require a deeper understanding of human moral reasoning processes, and the difficulty in many domains of defining just what ethical behavior is. There are also cross-cultural differences which need to be accounted for.”

An unexpected benefit of developing an ethical advisor for robots is that the advising might assist humans facing ethically challenging decisions as well. Computerized ethical advising already exists for law and bioethics, and similar computational machinery might also enhance ethical behavior in human-human relationships. “Perhaps if robots could act as role models in situations where humans have difficulty acting in accord with moral standards, this could positively reinforce ethical behavior in people, but that’s an unproven hypothesis,” Arkin said.

(Image caption: RI-MAN, a robot developed by researchers at RIKEN in Japan, designed for human care.)
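To make the governor idea concrete, the guilt mechanism described above can be reduced to a very small sketch: a governor vetoes actions outside predefined ethical bounds, accumulates a guilt value on each violation attempt, and locks out an ability once guilt crosses a threshold. This is only an illustration of the general idea; the class, constraint names and threshold here are hypothetical and do not reflect the architecture in Arkin and colleagues’ paper.

```python
# Illustrative sketch of an "ethical governor" with a guilt threshold.
# All names (EthicalGovernor, forbidden_actions, etc.) are hypothetical.

class EthicalGovernor:
    def __init__(self, forbidden_actions, guilt_threshold=3):
        self.forbidden = set(forbidden_actions)  # predefined ethical bounds
        self.guilt = 0                           # rises on each violation attempt
        self.guilt_threshold = guilt_threshold
        self.restricted = set()                  # abilities locked out by guilt

    def authorize(self, action):
        """Return True if the action may proceed; veto and record guilt otherwise."""
        if action in self.restricted:
            return False                         # ability already restricted
        if action in self.forbidden:
            self.guilt += 1                      # the robot "feels" guilt
            if self.guilt >= self.guilt_threshold:
                self.restricted.add(action)      # temporarily restrict the ability
            return False                         # veto the forbidden action
        return True

governor = EthicalGovernor(forbidden_actions={"target_noncombatant"})
print(governor.authorize("navigate"))             # True: within ethical bounds
print(governor.authorize("target_noncombatant"))  # False: vetoed, guilt rises
```

In a real system the "bounds" would be far richer than a set lookup (for a military robot, rules derived from the Geneva Conventions), and guilt would modulate behavior gradually rather than through a single cutoff, but the veto-accumulate-restrict loop is the core of the mechanism the article describes.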
Image credit: RIKEN, Bio-Mimetic Control Research Center.

More information: Ronald Craig Arkin, Patrick Ulam and Alan R. Wagner. “Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception.” Proceedings of the IEEE, Vol. 100, No. 3, March 2012. DOI: 10.1109/JPROC.2011.2173265

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.